Feb 16 14:53:24 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 14:53:24 crc restorecon[4699]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 14:53:24 crc restorecon[4699]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 14:53:24 crc restorecon[4699]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc 
restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:24 crc restorecon[4699]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc 
restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 
14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 14:53:24 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 
14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:53:25 crc restorecon[4699]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 
crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:53:25 crc restorecon[4699]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc 
restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:53:25 crc restorecon[4699]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 14:53:26 crc kubenswrapper[4705]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 14:53:26 crc kubenswrapper[4705]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 14:53:26 crc kubenswrapper[4705]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 14:53:26 crc kubenswrapper[4705]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 14:53:26 crc kubenswrapper[4705]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 14:53:26 crc kubenswrapper[4705]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.209675 4705 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217862 4705 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217916 4705 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217928 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217939 4705 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217953 4705 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217965 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217977 4705 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217987 4705 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.217997 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 14:53:26 crc 
kubenswrapper[4705]: W0216 14:53:26.218008 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218018 4705 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218031 4705 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218043 4705 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218053 4705 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218063 4705 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218072 4705 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218081 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218090 4705 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218100 4705 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218109 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218118 4705 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218127 4705 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218135 4705 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218143 4705 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218151 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218159 4705 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218166 4705 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218177 4705 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218187 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218196 4705 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218206 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218214 4705 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218223 4705 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218249 4705 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218258 4705 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218266 4705 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218274 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218283 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218291 4705 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218299 4705 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218308 4705 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218316 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218325 4705 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218333 4705 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218342 4705 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218350 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218359 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218399 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218407 4705 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218415 4705 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218422 4705 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218432 4705 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218440 4705 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218447 4705 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218456 4705 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218463 4705 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218472 4705 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218480 4705 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218487 4705 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218495 4705 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218503 4705 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218511 4705 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218521 4705 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218529 4705 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218537 4705 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218545 4705 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218553 4705 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218562 4705 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218570 4705 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218578 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.218586 4705 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218814 4705 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218840 4705 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218860 4705 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218873 4705 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218884 4705 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218894 4705 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218906 4705 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218917 4705 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218926 4705 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218935 4705 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218945 4705 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218956 4705 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218965 4705 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218974 4705 flags.go:64] FLAG: --cgroup-root=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218983 4705 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.218992 4705 flags.go:64] FLAG: --client-ca-file=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219001 4705 flags.go:64] FLAG: --cloud-config=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219010 4705 flags.go:64] FLAG: --cloud-provider=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219022 4705 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219035 4705 flags.go:64] FLAG: --cluster-domain=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219043 4705 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219053 4705 flags.go:64] FLAG: --config-dir=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219062 4705 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219090 4705 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219111 4705 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219121 4705 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219130 4705 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219141 4705 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219150 4705 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219159 4705 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219168 4705 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219178 4705 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219187 4705 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219198 4705 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219207 4705 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219216 4705 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219225 4705 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219234 4705 flags.go:64] FLAG: --enable-server="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219243 4705 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219256 4705 flags.go:64] FLAG: --event-burst="100"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219265 4705 flags.go:64] FLAG: --event-qps="50"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219273 4705 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219282 4705 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219293 4705 flags.go:64] FLAG: --eviction-hard=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219304 4705 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219314 4705 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219323 4705 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219333 4705 flags.go:64] FLAG: --eviction-soft=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219343 4705 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219353 4705 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219363 4705 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219399 4705 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219408 4705 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219418 4705 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219427 4705 flags.go:64] FLAG: --feature-gates=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219438 4705 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219447 4705 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219456 4705 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219465 4705 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219475 4705 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219484 4705 flags.go:64] FLAG: --help="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219493 4705 flags.go:64] FLAG: --hostname-override=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219502 4705 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219511 4705 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219520 4705 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219530 4705 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219538 4705 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219548 4705 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219556 4705 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219566 4705 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219575 4705 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219584 4705 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219594 4705 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219602 4705 flags.go:64] FLAG: --kube-reserved=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219612 4705 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219621 4705 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219630 4705 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219638 4705 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219647 4705 flags.go:64] FLAG: --lock-file=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219656 4705 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219665 4705 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219674 4705 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219688 4705 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219699 4705 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219708 4705 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219717 4705 flags.go:64] FLAG: --logging-format="text"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219726 4705 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219735 4705 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219744 4705 flags.go:64] FLAG: --manifest-url=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219752 4705 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219764 4705 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219773 4705 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219784 4705 flags.go:64] FLAG: --max-pods="110"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219794 4705 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219803 4705 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219813 4705 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219822 4705 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219831 4705 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219840 4705 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219849 4705 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219869 4705 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219878 4705 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219887 4705 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219896 4705 flags.go:64] FLAG: --pod-cidr=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219906 4705 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219919 4705 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219928 4705 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219937 4705 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219946 4705 flags.go:64] FLAG: --port="10250"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219956 4705 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219965 4705 flags.go:64] FLAG: --provider-id=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219973 4705 flags.go:64] FLAG: --qos-reserved=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219982 4705 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.219991 4705 flags.go:64] FLAG: --register-node="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220000 4705 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220009 4705 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220024 4705 flags.go:64] FLAG: --registry-burst="10"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220033 4705 flags.go:64] FLAG: --registry-qps="5"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220042 4705 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220052 4705 flags.go:64] FLAG: --reserved-memory=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220063 4705 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220073 4705 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220082 4705 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220090 4705 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220117 4705 flags.go:64] FLAG: --runonce="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220126 4705 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220135 4705 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220145 4705 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220153 4705 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220162 4705 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220172 4705 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220182 4705 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220191 4705 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220200 4705 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220209 4705 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220218 4705 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220227 4705 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220236 4705 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220246 4705 flags.go:64] FLAG: --system-cgroups=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220254 4705 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220268 4705 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220277 4705 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220286 4705 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220297 4705 flags.go:64] FLAG: --tls-min-version=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220306 4705 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220315 4705 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220334 4705 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220343 4705 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220353 4705 flags.go:64] FLAG: --v="2"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220364 4705 flags.go:64] FLAG: --version="false"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220398 4705 flags.go:64] FLAG: --vmodule=""
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220409 4705 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.220419 4705 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220639 4705 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220650 4705 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220661 4705 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220671 4705 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220680 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220691 4705 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220701 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220712 4705 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220720 4705 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220729 4705 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220738 4705 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220747 4705 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220755 4705 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220763 4705 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220771 4705 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220780 4705 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220788 4705 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220797 4705 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220805 4705 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220813 4705 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220821 4705 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220828 4705 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220836 4705 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220844 4705 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220851 4705 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220871 4705 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220879 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220887 4705 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220895 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220902 4705 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220910 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220917 4705 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220925 4705 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220932 4705 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220940 4705 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220948 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220956 4705 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220964 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220973 4705 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220981 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.220989 4705 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221000 4705 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221010 4705 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221018 4705 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221027 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221035 4705 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221043 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221051 4705 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221061 4705 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221071 4705 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221079 4705 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221087 4705 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221096 4705 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221103 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221111 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221118 4705 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221127 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221140 4705 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221148 4705 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221155 4705 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221163 4705 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221171 4705 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221179 4705 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221187 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221194 4705 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221202 4705 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221209 4705 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221217 4705 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221227 4705 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221237 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.221244 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.221257 4705 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.230380 4705 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.230401 4705 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230468 4705 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230475 4705 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230481 4705 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230486 4705 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230490 4705 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230494 4705 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230498 4705 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230501 4705 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230505 4705 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230509 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230513 4705 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230516 4705 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230520 4705 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230523 4705 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 
14:53:26.230527 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230530 4705 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230534 4705 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230538 4705 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230542 4705 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230545 4705 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230549 4705 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230553 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230556 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230560 4705 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230564 4705 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230568 4705 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230571 4705 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230576 4705 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230580 4705 
feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230583 4705 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230588 4705 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230592 4705 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230595 4705 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230599 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230604 4705 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230607 4705 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230611 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230615 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230618 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230622 4705 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230625 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230629 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230633 4705 feature_gate.go:330] unrecognized feature gate: 
NutanixMultiSubnets Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230636 4705 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230640 4705 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230644 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230647 4705 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230652 4705 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230657 4705 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230661 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230665 4705 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230669 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230673 4705 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230677 4705 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230681 4705 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230685 4705 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230689 4705 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230694 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230698 4705 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230702 4705 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230706 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230710 4705 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230714 4705 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230717 4705 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230721 4705 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230724 4705 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230728 4705 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230732 4705 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230735 4705 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230740 4705 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230746 4705 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.230751 4705 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230877 4705 feature_gate.go:330] unrecognized feature gate: Example Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230882 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230886 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230889 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230893 4705 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230897 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230901 4705 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230904 4705 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230908 4705 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230912 4705 feature_gate.go:330] 
unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230916 4705 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230919 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230923 4705 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230926 4705 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230930 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230934 4705 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230937 4705 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230940 4705 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230944 4705 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230948 4705 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230952 4705 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230957 4705 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230961 4705 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230965 4705 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230969 4705 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230973 4705 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230976 4705 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230980 4705 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230984 4705 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230988 4705 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230991 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230995 4705 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.230999 4705 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231002 4705 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231006 4705 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231010 4705 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231014 4705 feature_gate.go:330] unrecognized feature gate: 
NodeDisruptionPolicy Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231018 4705 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231021 4705 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231025 4705 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231028 4705 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231032 4705 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231036 4705 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231039 4705 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231043 4705 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231046 4705 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231050 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231053 4705 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231057 4705 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231061 4705 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231064 4705 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 14:53:26 crc 
kubenswrapper[4705]: W0216 14:53:26.231069 4705 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231074 4705 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231079 4705 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231084 4705 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231089 4705 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231093 4705 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231097 4705 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231101 4705 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231105 4705 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231109 4705 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231113 4705 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231117 4705 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231122 4705 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231126 4705 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231130 4705 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231133 4705 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231137 4705 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231141 4705 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231145 4705 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.231149 4705 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.231155 4705 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.231309 4705 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.235297 4705 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 
14:53:26.235388 4705 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.237002 4705 server.go:997] "Starting client certificate rotation" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.237023 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.237267 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-10 14:08:53.361911321 +0000 UTC Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.237430 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.261181 4705 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.264422 4705 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.264429 4705 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.281023 4705 log.go:25] "Validated CRI v1 runtime API" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.310749 4705 log.go:25] "Validated CRI v1 image API" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.312047 4705 server.go:1437] "Using cgroup driver setting 
received from the CRI runtime" cgroupDriver="systemd" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.318120 4705 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-14-48-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.318161 4705 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.334250 4705 manager.go:217] Machine: {Timestamp:2026-02-16 14:53:26.330214515 +0000 UTC m=+0.515191611 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e0a92891-331c-4cfd-852e-c93d09da3492 BootID:c4ce382a-96e5-4027-9451-936b39edc61d Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 
Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:62:bb:f1 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:62:bb:f1 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:dc:57:22 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:fc:3a:f4 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ef:d2:79 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:28:52:05 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:d5:90:d1:c5:6f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:62:24:5b:c6:5a:e0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 
Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown 
InstanceID:None} Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.334522 4705 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.334671 4705 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.337047 4705 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.337562 4705 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.337621 4705 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Qu
antity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.337955 4705 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.337975 4705 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.338634 4705 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.338689 4705 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.339945 4705 state_mem.go:36] "Initialized new in-memory state store" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.340091 4705 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.344723 4705 kubelet.go:418] "Attempting to sync node with API server" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.344760 4705 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.344823 4705 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.344853 4705 kubelet.go:324] "Adding apiserver pod source" 
Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.344877 4705 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.348509 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.348583 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.348698 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.348788 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.349794 4705 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.350767 4705 certificate_store.go:130] Loading cert/key pair from 
"/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.354563 4705 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356590 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356631 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356645 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356658 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356749 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356766 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356781 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356803 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356818 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356832 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356851 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.356864 4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.357767 
4705 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.358588 4705 server.go:1280] "Started kubelet" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.361274 4705 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.361957 4705 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.362114 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:26 crc systemd[1]: Started Kubernetes Kubelet. Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.363364 4705 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.364569 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.364602 4705 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.364632 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:45:41.504749566 +0000 UTC Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.372594 4705 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.372621 4705 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.372725 4705 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 16 14:53:26 crc 
kubenswrapper[4705]: E0216 14:53:26.372964 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="200ms" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.373698 4705 factory.go:55] Registering systemd factory Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.373872 4705 factory.go:221] Registration of the systemd container factory successfully Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.373877 4705 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.374463 4705 factory.go:153] Registering CRI-O factory Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.374499 4705 factory.go:221] Registration of the crio container factory successfully Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.374461 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.374597 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.374611 4705 server.go:460] "Adding debug handlers to kubelet server" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.374618 4705 factory.go:219] Registration of the containerd container factory failed: unable to create containerd 
client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.374846 4705 factory.go:103] Registering Raw factory Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.374879 4705 manager.go:1196] Started watching for new ooms in manager Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.373921 4705 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894c1c53e217a88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:53:26.35854708 +0000 UTC m=+0.543524186,LastTimestamp:2026-02-16 14:53:26.35854708 +0000 UTC m=+0.543524186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.376620 4705 manager.go:319] Starting recovery of all containers Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.383847 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.383945 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 16 14:53:26 crc 
kubenswrapper[4705]: I0216 14:53:26.383972 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.383992 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384010 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384028 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384046 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384064 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384086 4705 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384106 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384159 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384179 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384241 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384349 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384420 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384452 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384477 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384503 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384530 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384557 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384585 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384612 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384640 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384669 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384693 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384719 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384761 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384794 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384838 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384868 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384899 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384931 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384958 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.384984 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385012 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385039 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385067 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385094 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385122 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" 
seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385155 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385189 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385218 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385247 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385275 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385351 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 
14:53:26.385410 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385440 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385467 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385493 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385520 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385548 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385577 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385624 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385655 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385685 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385717 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385748 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385775 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385801 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385830 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385856 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385883 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385909 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385936 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385964 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.385988 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386013 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386038 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386064 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386091 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386116 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386144 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386173 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386201 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386232 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386258 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386285 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386313 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386341 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386434 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386468 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386499 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" 
seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386527 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386552 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386580 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386607 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386632 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386661 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386686 4705 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386711 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386738 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386764 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386791 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386815 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386840 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386866 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386891 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386915 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386945 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386973 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.386999 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387024 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387048 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387077 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387113 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387142 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387171 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387199 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387226 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387252 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387280 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387306 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387333 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387358 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387450 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387480 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387507 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387538 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387563 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387592 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387621 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387646 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387672 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387697 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387724 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387752 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387779 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387804 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387831 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387861 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387887 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387914 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387942 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387967 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.387992 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388020 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388046 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388072 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388100 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388150 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388178 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388207 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388233 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388259 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388286 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388317 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388343 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388403 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388435 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388462 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388488 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388518 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388544 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388570 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388599 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388628 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388655 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388684 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388710 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388738 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388766 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388792 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388825 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388854 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.388883 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393419 4705 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393478 4705 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393505 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393520 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393538 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393554 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393567 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393582 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393595 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393615 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393626 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393636 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393655 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393667 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393683 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393695 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393707 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393722 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393769 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393786 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393799 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393815 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393831 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393844 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393861 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393874 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393890 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393906 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393919 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393936 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393949 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393962 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393976 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.393991 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394010 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394026 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394042 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394059 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394074 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394090 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394104 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394117 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394133 4705 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394143 4705 reconstruct.go:97] "Volume reconstruction finished" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.394150 4705 reconciler.go:26] "Reconciler: start to sync state" Feb 16 14:53:26 crc 
kubenswrapper[4705]: I0216 14:53:26.411401 4705 manager.go:324] Recovery completed Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.415046 4705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.417255 4705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.417490 4705 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.417678 4705 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.418244 4705 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.418309 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.418669 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.430855 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.432476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.432517 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.432534 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.433252 4705 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.433271 4705 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.433290 4705 state_mem.go:36] "Initialized new in-memory state store" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.447447 4705 policy_none.go:49] "None policy: Start" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.448391 4705 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.448428 4705 state_mem.go:35] "Initializing new in-memory state store" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.474347 4705 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.508753 4705 manager.go:334] "Starting Device Plugin manager" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.508814 4705 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.508831 4705 server.go:79] "Starting device plugin registration server" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.509343 4705 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.509379 4705 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.509720 4705 
plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.509821 4705 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.509832 4705 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.518498 4705 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.518576 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.519799 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.519829 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.519838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.519964 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.520155 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.520214 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.521163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.521196 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.521209 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.521322 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.521614 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.521726 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522049 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522059 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522070 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522099 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522196 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522296 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.522317 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523112 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523134 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523144 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523242 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523289 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523211 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523336 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523355 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523662 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc 
kubenswrapper[4705]: I0216 14:53:26.523845 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.523875 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.524094 4705 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.524921 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.524938 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.524946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.525514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.525544 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.525556 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.525726 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.525756 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.526452 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.526469 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.526476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.574028 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="400ms" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596351 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596405 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596430 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596447 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596466 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596483 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596498 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596514 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596531 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596546 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596560 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596597 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596611 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596625 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.596639 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.610902 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.614645 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.614934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.615075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.615226 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.616093 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Feb 16 14:53:26 
crc kubenswrapper[4705]: I0216 14:53:26.697435 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697498 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697523 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697544 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697564 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697589 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697611 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697653 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697671 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697691 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 
16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697712 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697733 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697752 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.697774 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698116 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698156 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698220 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698263 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698289 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698313 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc 
kubenswrapper[4705]: I0216 14:53:26.698347 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698391 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698411 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698429 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698331 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698453 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698448 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.698506 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.817225 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.818933 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.818966 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.818978 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.819000 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.819404 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Feb 16 14:53:26 crc 
kubenswrapper[4705]: I0216 14:53:26.878898 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.901289 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.908429 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.915879 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-183cdc2260db85e735465adc4b2c9c9154b24894f6448d191dafbde01cf6767c WatchSource:0}: Error finding container 183cdc2260db85e735465adc4b2c9c9154b24894f6448d191dafbde01cf6767c: Status 404 returned error can't find the container with id 183cdc2260db85e735465adc4b2c9c9154b24894f6448d191dafbde01cf6767c Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.926167 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-fec88377e91280a3cc99a832dab3909f5a2ac7c6477dfd4cc906fe1c5a1335a3 WatchSource:0}: Error finding container fec88377e91280a3cc99a832dab3909f5a2ac7c6477dfd4cc906fe1c5a1335a3: Status 404 returned error can't find the container with id fec88377e91280a3cc99a832dab3909f5a2ac7c6477dfd4cc906fe1c5a1335a3 Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.927021 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-84ebda7619418d6e4917661b49236c48bc209ff7a28dc73c61ea21b8820032dc WatchSource:0}: Error finding 
container 84ebda7619418d6e4917661b49236c48bc209ff7a28dc73c61ea21b8820032dc: Status 404 returned error can't find the container with id 84ebda7619418d6e4917661b49236c48bc209ff7a28dc73c61ea21b8820032dc Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.930869 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: I0216 14:53:26.935770 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:26 crc kubenswrapper[4705]: W0216 14:53:26.950507 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-5e8bc54e53026a9830f5200bf898bcf611f28fe6c0a596e6b7f5e117856b4af0 WatchSource:0}: Error finding container 5e8bc54e53026a9830f5200bf898bcf611f28fe6c0a596e6b7f5e117856b4af0: Status 404 returned error can't find the container with id 5e8bc54e53026a9830f5200bf898bcf611f28fe6c0a596e6b7f5e117856b4af0 Feb 16 14:53:26 crc kubenswrapper[4705]: E0216 14:53:26.975692 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="800ms" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.220447 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.222276 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.222345 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.222363 
4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.222430 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.222868 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.363518 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.365519 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:39:54.309802767 +0000 UTC Feb 16 14:53:27 crc kubenswrapper[4705]: W0216 14:53:27.396538 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.396620 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:27 crc kubenswrapper[4705]: W0216 14:53:27.411988 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.412065 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.423109 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"84ebda7619418d6e4917661b49236c48bc209ff7a28dc73c61ea21b8820032dc"} Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.426289 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fec88377e91280a3cc99a832dab3909f5a2ac7c6477dfd4cc906fe1c5a1335a3"} Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.427325 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"183cdc2260db85e735465adc4b2c9c9154b24894f6448d191dafbde01cf6767c"} Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.428280 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5e8bc54e53026a9830f5200bf898bcf611f28fe6c0a596e6b7f5e117856b4af0"} Feb 16 14:53:27 crc kubenswrapper[4705]: I0216 14:53:27.429136 4705 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b306ad403b24de209f4328e7c904434e6a863cc98493518aabf86d03063c04d5"} Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.697212 4705 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894c1c53e217a88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:53:26.35854708 +0000 UTC m=+0.543524186,LastTimestamp:2026-02-16 14:53:26.35854708 +0000 UTC m=+0.543524186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 14:53:27 crc kubenswrapper[4705]: W0216 14:53:27.731710 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.731813 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.776937 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="1.6s" Feb 16 14:53:27 crc kubenswrapper[4705]: W0216 14:53:27.949646 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:27 crc kubenswrapper[4705]: E0216 14:53:27.949740 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.023792 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.026169 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.026240 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.026261 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.026301 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:28 crc kubenswrapper[4705]: E0216 14:53:28.027202 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Feb 16 
14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.363959 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.365964 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:53:57.233856948 +0000 UTC Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.435528 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9" exitCode=0 Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.435705 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.435910 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.436802 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.437060 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.437095 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.437113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4705]: E0216 
14:53:28.438478 4705 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.438726 4705 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8" exitCode=0 Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.438919 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.439336 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.439778 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.440671 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.440869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.441014 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.441253 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc 
kubenswrapper[4705]: I0216 14:53:28.441324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.441361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.443914 4705 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f" exitCode=0 Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.443999 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.444711 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.446559 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.447008 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.447188 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.447986 4705 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a2d206ac5a36eaa4c99c4801a3e9a925a34a396ca196663bf0cf2fac451726d0" exitCode=0 Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.448141 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a2d206ac5a36eaa4c99c4801a3e9a925a34a396ca196663bf0cf2fac451726d0"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.448250 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.449787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.449831 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.449849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.453226 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.453281 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.453302 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.453321 4705 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9"} Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.453441 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.454826 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.454868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4705]: I0216 14:53:28.454881 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4705]: W0216 14:53:29.103543 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:29 crc kubenswrapper[4705]: E0216 14:53:29.103639 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.362852 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:29 crc 
kubenswrapper[4705]: I0216 14:53:29.366942 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:31:08.564018077 +0000 UTC Feb 16 14:53:29 crc kubenswrapper[4705]: E0216 14:53:29.377890 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="3.2s" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.460785 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5b947f41146cb72121d65dd9fbf450be2466414f7e51fcd4b73c8bc1f5d78979"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.460866 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.464156 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.464229 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.464249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.469333 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.469425 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.469442 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.469454 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.473358 4705 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb" exitCode=0 Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.473500 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.473532 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.474838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.474877 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.474891 4705 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.478269 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.478847 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.479194 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.479233 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.479254 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08"} Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.480192 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.480223 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.480239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.480276 4705 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.480304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.480317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4705]: W0216 14:53:29.508956 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:29 crc kubenswrapper[4705]: E0216 14:53:29.509061 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.627608 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.628927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.628960 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.628969 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4705]: I0216 14:53:29.628993 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:29 crc kubenswrapper[4705]: E0216 
14:53:29.629342 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Feb 16 14:53:29 crc kubenswrapper[4705]: W0216 14:53:29.755554 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Feb 16 14:53:29 crc kubenswrapper[4705]: E0216 14:53:29.755662 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.367842 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 13:47:03.240809993 +0000 UTC Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.490091 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d"} Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.490175 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.492864 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.492939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.492957 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.494243 4705 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf" exitCode=0 Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.494355 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.494425 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.494468 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.494355 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf"} Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.494519 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.495946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.495990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.495952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.496037 4705 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.496053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.496009 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.496557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.496587 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4705]: I0216 14:53:30.496603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.368876 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:06:33.786317613 +0000 UTC Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.507675 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d"} Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.507734 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.507783 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd"} Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.507816 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8"} Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.507824 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.509735 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.509805 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4705]: I0216 14:53:31.509825 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.087490 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.369638 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:52:44.967787084 +0000 UTC Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.519208 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b"} Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.519307 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081"} Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.519318 4705 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.519418 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.519440 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.521246 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.521324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.521334 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.521428 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.521457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.521351 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.821494 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.830285 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.831959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.832010 
4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.832025 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4705]: I0216 14:53:32.832059 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.370649 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 03:18:23.414796318 +0000 UTC Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.425838 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.426278 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.428421 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.428497 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.428517 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.472892 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.511407 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.523021 4705 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.523111 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.523178 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.523253 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525475 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525516 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525486 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525596 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525611 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 14:53:33.525561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4705]: I0216 
14:53:33.525684 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.371109 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:17:14.930951777 +0000 UTC Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.577746 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.578040 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.579932 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.579988 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.580016 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.618850 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.619096 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.621055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.621224 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:34 crc kubenswrapper[4705]: I0216 14:53:34.621254 4705 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:35 crc kubenswrapper[4705]: I0216 14:53:35.371313 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:52:01.537443827 +0000 UTC Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.243234 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.243504 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.245626 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.245692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.245714 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.251939 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.372063 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:54:24.027185085 +0000 UTC Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.426325 4705 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.426524 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 14:53:36 crc kubenswrapper[4705]: E0216 14:53:36.524898 4705 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.532557 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.532773 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.533881 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.533935 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.534017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.666273 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.666636 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.668332 4705 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.668458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:36 crc kubenswrapper[4705]: I0216 14:53:36.668518 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:37 crc kubenswrapper[4705]: I0216 14:53:37.372667 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:00:53.161449808 +0000 UTC Feb 16 14:53:37 crc kubenswrapper[4705]: I0216 14:53:37.535939 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:37 crc kubenswrapper[4705]: I0216 14:53:37.537571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:37 crc kubenswrapper[4705]: I0216 14:53:37.537652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:37 crc kubenswrapper[4705]: I0216 14:53:37.537674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:37 crc kubenswrapper[4705]: I0216 14:53:37.542931 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:38 crc kubenswrapper[4705]: I0216 14:53:38.373589 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:18:13.367939656 +0000 UTC Feb 16 14:53:38 crc kubenswrapper[4705]: I0216 14:53:38.542015 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:38 crc 
kubenswrapper[4705]: I0216 14:53:38.543304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:38 crc kubenswrapper[4705]: I0216 14:53:38.543409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:38 crc kubenswrapper[4705]: I0216 14:53:38.543432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:39 crc kubenswrapper[4705]: I0216 14:53:39.374419 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:40:38.115672971 +0000 UTC Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.366426 4705 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.374776 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:14:31.640413432 +0000 UTC Feb 16 14:53:40 crc kubenswrapper[4705]: W0216 14:53:40.452825 4705 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.452943 4705 trace.go:236] Trace[1731117463]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 14:53:30.450) (total time: 10002ms): Feb 16 14:53:40 crc kubenswrapper[4705]: Trace[1731117463]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake 
timeout 10002ms (14:53:40.452) Feb 16 14:53:40 crc kubenswrapper[4705]: Trace[1731117463]: [10.002480553s] [10.002480553s] END Feb 16 14:53:40 crc kubenswrapper[4705]: E0216 14:53:40.452980 4705 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.708444 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.708794 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.710404 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.710466 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.710478 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.851142 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.851247 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.865435 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 14:53:40 crc kubenswrapper[4705]: I0216 14:53:40.865502 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 14:53:41 crc kubenswrapper[4705]: I0216 14:53:41.375192 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:07:25.511782363 +0000 UTC Feb 16 14:53:42 crc kubenswrapper[4705]: I0216 14:53:42.092991 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]log ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]etcd ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 16 14:53:42 crc kubenswrapper[4705]: 
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/priority-and-fairness-filter ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-apiextensions-informers ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-apiextensions-controllers ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/crd-informer-synced ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-system-namespaces-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 16 14:53:42 crc kubenswrapper[4705]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/bootstrap-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 16 14:53:42 crc kubenswrapper[4705]: 
[+]poststarthook/start-kube-aggregator-informers ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-registration-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-discovery-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]autoregister-completion ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-openapi-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 16 14:53:42 crc kubenswrapper[4705]: livez check failed Feb 16 14:53:42 crc kubenswrapper[4705]: I0216 14:53:42.093817 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:53:42 crc kubenswrapper[4705]: I0216 14:53:42.375553 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:15:46.507731422 +0000 UTC Feb 16 14:53:43 crc kubenswrapper[4705]: I0216 14:53:43.375701 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:54:49.204180321 +0000 UTC Feb 16 14:53:44 crc kubenswrapper[4705]: I0216 14:53:44.376700 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:00:53.653682345 +0000 UTC 
Feb 16 14:53:44 crc kubenswrapper[4705]: I0216 14:53:44.846102 4705 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.376860 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:46:14.417143106 +0000 UTC Feb 16 14:53:45 crc kubenswrapper[4705]: E0216 14:53:45.867437 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 14:53:45 crc kubenswrapper[4705]: E0216 14:53:45.869112 4705 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.870177 4705 trace.go:236] Trace[519180976]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 14:53:34.905) (total time: 10964ms): Feb 16 14:53:45 crc kubenswrapper[4705]: Trace[519180976]: ---"Objects listed" error: 10964ms (14:53:45.870) Feb 16 14:53:45 crc kubenswrapper[4705]: Trace[519180976]: [10.964476342s] [10.964476342s] END Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.870208 4705 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.870235 4705 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.870279 4705 trace.go:236] Trace[1909566368]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 14:53:35.557) (total time: 10312ms): Feb 16 14:53:45 crc kubenswrapper[4705]: Trace[1909566368]: ---"Objects 
listed" error: 10312ms (14:53:45.870) Feb 16 14:53:45 crc kubenswrapper[4705]: Trace[1909566368]: [10.312639122s] [10.312639122s] END Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.870299 4705 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.873315 4705 trace.go:236] Trace[736709864]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 14:53:34.183) (total time: 11689ms): Feb 16 14:53:45 crc kubenswrapper[4705]: Trace[736709864]: ---"Objects listed" error: 11689ms (14:53:45.873) Feb 16 14:53:45 crc kubenswrapper[4705]: Trace[736709864]: [11.689395657s] [11.689395657s] END Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.873351 4705 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.874045 4705 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.898547 4705 csr.go:261] certificate signing request csr-pvx5v is approved, waiting to be issued Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.907107 4705 csr.go:257] certificate signing request csr-pvx5v is issued Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.911602 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57022->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.911657 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57022->192.168.126.11:17697: read: connection reset by peer" Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.911609 4705 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49578->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 16 14:53:45 crc kubenswrapper[4705]: I0216 14:53:45.911711 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49578->192.168.126.11:17697: read: connection reset by peer" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.237112 4705 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 14:53:46 crc kubenswrapper[4705]: W0216 14:53:46.237401 4705 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 14:53:46 crc kubenswrapper[4705]: W0216 14:53:46.237431 4705 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.237352 4705 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.47:46132->38.102.83.47:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894c1c560339cfe openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:53:26.930160894 +0000 UTC m=+1.115137970,LastTimestamp:2026-02-16 14:53:26.930160894 +0000 UTC m=+1.115137970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 14:53:46 crc kubenswrapper[4705]: W0216 14:53:46.237536 4705 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.355822 4705 apiserver.go:52] "Watching apiserver" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.359326 4705 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.359556 4705 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.359940 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.360202 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.360279 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.360528 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.360642 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.360634 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.360712 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.360735 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.360852 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.364775 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.365235 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.365448 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.367677 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.367759 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.367807 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.367861 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.367931 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.367958 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.374039 4705 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 14:53:46 crc kubenswrapper[4705]: 
I0216 14:53:46.377699 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 12:18:54.185516276 +0000 UTC Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.401811 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.419008 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.427134 4705 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.427198 4705 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.429240 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.438115 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.447671 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.460251 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.472180 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474394 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474643 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474666 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474687 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474706 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474721 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474830 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474866 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474889 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474912 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474936 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474958 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.474977 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475000 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475021 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475040 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475043 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475092 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475109 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475129 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475147 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475165 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475181 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475198 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475218 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475237 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475252 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475267 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475282 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475299 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475314 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475330 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475346 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475362 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475424 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475441 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475462 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475479 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " 
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475495 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475539 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475556 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475570 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475584 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475606 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475627 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475701 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475721 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475744 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475787 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475803 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475818 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475832 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475849 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475866 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475882 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475900 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475887 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475925 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.475921 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476024 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476051 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476059 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476114 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476143 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476169 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476196 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476224 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476249 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476280 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476310 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476364 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476430 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 
14:53:46.476463 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476456 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476502 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476530 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476554 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476580 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476606 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476618 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476653 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476680 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476694 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476704 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476778 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476797 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476806 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476828 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476788 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477015 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477020 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477063 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477076 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477056 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477119 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477206 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477280 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477311 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477435 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477458 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477618 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477728 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477749 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477819 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.477868 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478209 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478248 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478279 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478511 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478503 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478527 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478614 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478660 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478693 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478799 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.478877 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.479174 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.479269 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.479653 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.479794 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.479860 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.480099 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.480122 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.480156 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.480510 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.480965 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481091 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.476806 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481642 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481717 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481789 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481856 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481918 4705 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.481968 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482001 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482055 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482029 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482203 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482205 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482252 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482278 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 14:53:46 crc 
kubenswrapper[4705]: I0216 14:53:46.482298 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482318 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482414 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482435 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482461 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482480 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482497 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482516 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482534 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482552 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482571 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482590 4705 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482608 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482625 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482642 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482660 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482678 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482711 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482493 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482700 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482715 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482531 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.482853 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.483407 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.483463 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.483725 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.484088 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.484123 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.484167 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.484415 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.484975 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485025 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485061 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485423 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485444 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485463 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485474 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485496 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485517 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485537 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485544 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485557 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485576 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485594 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485613 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485630 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485648 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485664 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485682 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485686 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485696 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485701 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485743 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485764 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486072 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486148 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486256 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486387 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486477 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486655 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486815 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.485783 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486935 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486958 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.486976 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 14:53:46 crc 
kubenswrapper[4705]: I0216 14:53:46.486994 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487012 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487029 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487050 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487067 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487064 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487088 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487312 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487429 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487500 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487539 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod 
"7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487562 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487622 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487712 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487742 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.487864 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.488074 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.488218 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.488639 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.488867 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.490121 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.490363 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.492041 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.492261 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.492680 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493045 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493341 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493585 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493644 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493684 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493719 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493753 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493791 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493827 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493872 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493910 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493947 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.493992 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 14:53:46 crc 
kubenswrapper[4705]: I0216 14:53:46.494076 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494124 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494160 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494195 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494232 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494273 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494306 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494340 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494448 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494498 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494534 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 
14:53:46.494566 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494599 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494633 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494673 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494711 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494753 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494851 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494901 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494941 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.494995 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495037 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495075 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495093 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495109 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495144 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495156 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495183 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495228 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495269 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495310 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495348 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495394 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495409 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495447 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495533 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495574 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495612 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495747 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495789 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495831 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495870 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495902 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495936 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495971 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496011 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496051 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496089 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496128 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" 
(UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496286 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496428 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496476 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496528 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496567 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496604 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496647 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496694 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496747 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496791 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497080 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497296 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497345 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497405 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 
14:53:46.497435 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497565 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497584 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497600 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497618 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497634 4705 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497649 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath 
\"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497666 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497681 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497696 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497712 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497727 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497744 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497760 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 
14:53:46.497775 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497788 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497800 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497813 4705 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497831 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497846 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497859 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497873 4705 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497888 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497901 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497917 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497930 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497943 4705 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497956 4705 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497970 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497983 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497996 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498009 4705 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498026 4705 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498039 4705 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498054 4705 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498068 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" 
DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498082 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498096 4705 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498109 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498122 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498135 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498148 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498160 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498174 4705 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498187 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498200 4705 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498213 4705 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498227 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498240 4705 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498620 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498637 4705 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498650 4705 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498664 4705 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498676 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498694 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498706 4705 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498720 4705 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498733 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" 
Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498746 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498763 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498776 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498789 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498805 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498818 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498832 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498845 4705 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498859 4705 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498871 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498884 4705 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498896 4705 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498910 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498924 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498937 4705 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498949 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498962 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498976 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.498989 4705 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499002 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499017 4705 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499031 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499048 4705 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499070 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499095 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499113 4705 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499130 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499147 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499166 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc 
kubenswrapper[4705]: I0216 14:53:46.499182 4705 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499200 4705 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499216 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499238 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499256 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499274 4705 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499289 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499303 4705 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499318 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499331 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499344 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499359 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499397 4705 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.501616 4705 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.502923 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495471 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495734 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.495990 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496123 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496475 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496953 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497016 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.497244 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.496996 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.499898 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.499980 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.500697 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:53:47.000669789 +0000 UTC m=+21.185646865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.503132 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.503462 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.506886 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.504604 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.504806 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.504573 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.507001 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.507085 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:47.007048036 +0000 UTC m=+21.192025122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.507163 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:47.007136469 +0000 UTC m=+21.192113555 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.507647 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.510828 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.511117 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.511140 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.511156 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.511189 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.511243 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:47.011218292 +0000 UTC m=+21.196195378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.512709 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.512736 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.512750 4705 projected.go:194] Error preparing data for 
projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:46 crc kubenswrapper[4705]: E0216 14:53:46.512841 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:47.012823506 +0000 UTC m=+21.197800812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.513985 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.515052 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.515229 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.515421 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.515574 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516003 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516033 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516104 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516141 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516288 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516455 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516661 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516784 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516852 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516967 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.516971 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.517234 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.517287 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.517431 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.517517 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.517715 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.518554 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.518726 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.518844 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.519056 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.519574 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.523513 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.524198 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.524319 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.525964 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.526447 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.526619 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.526847 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.526987 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.527192 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.527287 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.527573 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.527733 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.528179 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.528478 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.528582 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.529047 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.529863 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.530492 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.530948 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.531180 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.531792 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.532835 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.532844 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.534499 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.534921 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.535382 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.535740 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.535962 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536164 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536208 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536560 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536628 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536732 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536762 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.536968 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.537422 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.537295 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.537990 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538071 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538285 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538326 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538421 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538475 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538544 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538655 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.538831 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.539078 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.539148 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.539174 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.539249 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.539276 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.541238 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.541188 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.544397 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.546133 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.552360 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.554743 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.560501 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.566532 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.567911 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.568141 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d" exitCode=255 Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.568185 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d"} Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.578859 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.589689 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.593272 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600274 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600654 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600703 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600741 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600752 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600761 4705 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600771 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc 
kubenswrapper[4705]: I0216 14:53:46.600780 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600790 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600800 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600810 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600821 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600833 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600816 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600876 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600843 4705 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600922 4705 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600946 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600964 4705 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.600989 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601003 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601018 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601033 4705 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601047 4705 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601060 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601074 4705 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601088 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601103 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601118 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601132 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601146 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601161 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601175 4705 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601193 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601207 4705 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601222 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601236 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601254 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601269 4705 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601285 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601298 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601318 4705 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node 
\"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601334 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601350 4705 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601404 4705 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601421 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601435 4705 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601449 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601463 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 
14:53:46.601476 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601490 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601505 4705 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601518 4705 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601531 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601545 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601559 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601572 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601586 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601599 4705 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601612 4705 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601624 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601638 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601653 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601670 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 
crc kubenswrapper[4705]: I0216 14:53:46.601687 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601700 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601714 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601729 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601743 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601756 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601769 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601782 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601795 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601825 4705 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601842 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601857 4705 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601872 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601887 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601902 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 
crc kubenswrapper[4705]: I0216 14:53:46.601917 4705 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601932 4705 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601948 4705 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601962 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601977 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.601992 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602007 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602022 4705 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602036 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602051 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602066 4705 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602080 4705 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602096 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602113 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602128 4705 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602145 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602160 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602177 4705 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602192 4705 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602214 4705 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.602230 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.607184 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.608660 4705 scope.go:117] "RemoveContainer" 
containerID="50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.627686 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.648013 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.659244 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.675128 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.685209 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.687964 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.695133 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.701109 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.703867 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.716078 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.728757 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.908224 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 14:48:45 +0000 UTC, rotation deadline is 2026-12-22 12:34:35.477626854 +0000 UTC Feb 16 14:53:46 crc kubenswrapper[4705]: I0216 14:53:46.908273 4705 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7413h40m48.569355685s for next certificate rotation Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.005607 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.005811 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:53:48.005777402 +0000 UTC m=+22.190754678 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.093251 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.106319 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.106534 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.106591 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.106617 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106640 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.106650 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106717 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:48.106683159 +0000 UTC m=+22.291660235 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106809 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106815 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106859 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106872 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106924 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:48.106909485 +0000 UTC m=+22.291886561 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106806 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106963 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:48.106957337 +0000 UTC m=+22.291934413 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106831 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.106978 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.107002 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:48.106996758 +0000 UTC m=+22.291973834 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.120960 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.132797 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.146393 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.183281 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.211204 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.235920 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.378653 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:41:45.847949411 +0000 UTC Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.419321 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:47 crc kubenswrapper[4705]: E0216 14:53:47.419477 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.571866 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.571912 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"13ba08b7aaa7aa92e52ddd42a7da43c1bb3f0bb40d70492599afb29d0b335469"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.574492 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.576044 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.576972 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.583721 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"735f0d146eef10de2a44400745b87e04a1f33bf2d095ec441be4a9c3c9c89be2"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.584415 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.587943 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.587991 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.588003 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4a26a9b10f6414261afe596837cbbf3b60cf6df49b031411d434d212e832bfee"} Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.594563 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information 
is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube
-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.616028 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.629206 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.642302 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.654112 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.666814 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.680349 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.699021 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.710255 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.715765 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2ljf7"] Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.716131 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.717690 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tshhr"] Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.718421 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bflhj"] Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.718573 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fnnf4"] Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.718645 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.718694 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.719464 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rwkxz"] Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.719759 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.719935 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.719959 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.722198 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.722211 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.722211 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.722275 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.722784 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.723492 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.725257 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.725753 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726121 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726340 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726414 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726473 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726393 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726486 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726541 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726653 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726723 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726744 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726743 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.726758 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.727079 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.744464 4705 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.756479 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.769992 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.785948 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.809057 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814406 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-cni-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814468 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-log-socket\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814504 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-socket-dir-parent\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814527 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-etc-kubernetes\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814548 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-slash\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814572 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-rootfs\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814606 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm7v9\" (UniqueName: \"kubernetes.io/projected/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-kube-api-access-zm7v9\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 
14:53:47.814640 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-config\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814668 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-script-lib\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814710 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814748 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-netd\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814772 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-conf-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 
crc kubenswrapper[4705]: I0216 14:53:47.814791 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-systemd-units\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814810 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814838 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-mcd-auth-proxy-config\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814871 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-proxy-tls\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814900 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814926 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0ec06562-0237-4709-9469-033783d7d545-multus-daemon-config\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814946 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-var-lib-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814966 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdgkl\" (UniqueName: \"kubernetes.io/projected/55f9230c-7ded-46f1-babb-eba339b0ca6c-kube-api-access-hdgkl\") pod \"node-resolver-bflhj\" (UID: \"55f9230c-7ded-46f1-babb-eba339b0ca6c\") " pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.814984 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-os-release\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815003 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-netns\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815021 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-hostroot\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815040 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-multus-certs\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815059 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vhrp\" (UniqueName: \"kubernetes.io/projected/0ec06562-0237-4709-9469-033783d7d545-kube-api-access-6vhrp\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815094 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-cnibin\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815114 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-os-release\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815131 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cni-binary-copy\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815163 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815186 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-node-log\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815208 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-cni-bin\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815228 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-kubelet\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815247 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-kubelet\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-system-cni-dir\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815286 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-bin\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815305 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/55f9230c-7ded-46f1-babb-eba339b0ca6c-hosts-file\") pod \"node-resolver-bflhj\" (UID: \"55f9230c-7ded-46f1-babb-eba339b0ca6c\") " pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815326 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-netns\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815348 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-systemd\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815390 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-ovn-kubernetes\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815412 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-system-cni-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815430 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0ec06562-0237-4709-9469-033783d7d545-cni-binary-copy\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815459 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-etc-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815479 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-env-overrides\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815498 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59e81100-8761-4e5f-bab6-07df1c795ccb-ovn-node-metrics-cert\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815520 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cnibin\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815542 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-cni-multus\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815562 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-ovn\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815585 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9gbv\" (UniqueName: \"kubernetes.io/projected/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-kube-api-access-h9gbv\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815626 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-k8s-cni-cncf-io\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.815708 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67wc5\" (UniqueName: \"kubernetes.io/projected/59e81100-8761-4e5f-bab6-07df1c795ccb-kube-api-access-67wc5\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.830013 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.842937 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.855133 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.873463 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.891118 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916326 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-cni-multus\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916383 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-ovn\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916407 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9gbv\" (UniqueName: 
\"kubernetes.io/projected/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-kube-api-access-h9gbv\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916446 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-k8s-cni-cncf-io\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916463 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67wc5\" (UniqueName: \"kubernetes.io/projected/59e81100-8761-4e5f-bab6-07df1c795ccb-kube-api-access-67wc5\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916478 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-cni-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916493 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-log-socket\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916514 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-socket-dir-parent\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916528 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-etc-kubernetes\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916543 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-slash\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916557 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-rootfs\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916573 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm7v9\" (UniqueName: \"kubernetes.io/projected/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-kube-api-access-zm7v9\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916588 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-netd\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916602 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-config\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916616 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-script-lib\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916647 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-conf-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916660 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-systemd-units\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916677 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916697 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-proxy-tls\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916717 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-mcd-auth-proxy-config\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916739 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0ec06562-0237-4709-9469-033783d7d545-multus-daemon-config\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916754 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-var-lib-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916770 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916789 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdgkl\" (UniqueName: \"kubernetes.io/projected/55f9230c-7ded-46f1-babb-eba339b0ca6c-kube-api-access-hdgkl\") pod \"node-resolver-bflhj\" (UID: \"55f9230c-7ded-46f1-babb-eba339b0ca6c\") " pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916807 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-os-release\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916825 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vhrp\" (UniqueName: \"kubernetes.io/projected/0ec06562-0237-4709-9469-033783d7d545-kube-api-access-6vhrp\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916962 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-cnibin\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916978 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-os-release\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.916992 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-netns\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917006 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-hostroot\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917020 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-multus-certs\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917035 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cni-binary-copy\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " 
pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917081 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-cni-bin\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917096 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-kubelet\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-kubelet\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917129 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917144 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-node-log\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 
14:53:47.917161 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-system-cni-dir\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917177 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-bin\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917191 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/55f9230c-7ded-46f1-babb-eba339b0ca6c-hosts-file\") pod \"node-resolver-bflhj\" (UID: \"55f9230c-7ded-46f1-babb-eba339b0ca6c\") " pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917206 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-system-cni-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917221 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0ec06562-0237-4709-9469-033783d7d545-cni-binary-copy\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917237 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-netns\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917252 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-systemd\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917266 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-ovn-kubernetes\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917282 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-env-overrides\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917303 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-etc-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917318 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/59e81100-8761-4e5f-bab6-07df1c795ccb-ovn-node-metrics-cert\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917333 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cnibin\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917450 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cnibin\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917489 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-cni-multus\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917513 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-ovn\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917748 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-k8s-cni-cncf-io\") 
pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917911 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-cni-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917935 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-log-socket\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917962 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-socket-dir-parent\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.917981 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-etc-kubernetes\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.918001 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-slash\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.918022 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-rootfs\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.918143 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-netd\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.918734 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-config\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919132 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-script-lib\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919379 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919418 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-cni-bin\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919496 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-var-lib-kubelet\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919521 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919556 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-system-cni-dir\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919562 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-multus-conf-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919583 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-systemd-units\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919604 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-bin\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919673 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/55f9230c-7ded-46f1-babb-eba339b0ca6c-hosts-file\") pod 
\"node-resolver-bflhj\" (UID: \"55f9230c-7ded-46f1-babb-eba339b0ca6c\") " pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.919716 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-system-cni-dir\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920251 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cni-binary-copy\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920487 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0ec06562-0237-4709-9469-033783d7d545-multus-daemon-config\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920512 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0ec06562-0237-4709-9469-033783d7d545-cni-binary-copy\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920622 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " 
pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920687 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-hostroot\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920699 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-etc-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920730 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-var-lib-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920747 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-kubelet\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920762 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-openvswitch\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920779 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-systemd\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920781 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-netns\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920820 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-node-log\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.920912 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-os-release\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921063 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-netns\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921103 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-os-release\") pod 
\"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921131 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-cnibin\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921230 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-env-overrides\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921222 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-ovn-kubernetes\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0ec06562-0237-4709-9469-033783d7d545-host-run-multus-certs\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921824 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " 
pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.921832 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-mcd-auth-proxy-config\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.929883 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59e81100-8761-4e5f-bab6-07df1c795ccb-ovn-node-metrics-cert\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.936495 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67wc5\" (UniqueName: \"kubernetes.io/projected/59e81100-8761-4e5f-bab6-07df1c795ccb-kube-api-access-67wc5\") pod \"ovnkube-node-tshhr\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.943861 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-proxy-tls\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.944078 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9gbv\" (UniqueName: \"kubernetes.io/projected/48761d4f-98a4-435f-ae5e-6cdb58dbc4a4-kube-api-access-h9gbv\") pod \"multus-additional-cni-plugins-rwkxz\" (UID: \"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\") " 
pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.950851 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vhrp\" (UniqueName: \"kubernetes.io/projected/0ec06562-0237-4709-9469-033783d7d545-kube-api-access-6vhrp\") pod \"multus-2ljf7\" (UID: \"0ec06562-0237-4709-9469-033783d7d545\") " pod="openshift-multus/multus-2ljf7" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.953920 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.957040 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm7v9\" (UniqueName: \"kubernetes.io/projected/6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c-kube-api-access-zm7v9\") pod \"machine-config-daemon-fnnf4\" (UID: \"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\") " pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.960558 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdgkl\" (UniqueName: \"kubernetes.io/projected/55f9230c-7ded-46f1-babb-eba339b0ca6c-kube-api-access-hdgkl\") pod \"node-resolver-bflhj\" (UID: \"55f9230c-7ded-46f1-babb-eba339b0ca6c\") " pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.982725 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:47 crc kubenswrapper[4705]: I0216 14:53:47.996330 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:47Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.008142 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.017536 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.017611 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.017770 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:53:50.017751275 +0000 UTC m=+24.202728341 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.027409 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.031709 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2ljf7" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.042474 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.052909 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bflhj" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.058301 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.065812 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:53:48 crc kubenswrapper[4705]: W0216 14:53:48.093074 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f92e3ed_2ba8_4202_a1b8_7350fadc1d8c.slice/crio-a573e5e7cc9dbf843ce05aa2564758b9ccddceb40e7e40255c47326921a8a793 WatchSource:0}: Error finding container a573e5e7cc9dbf843ce05aa2564758b9ccddceb40e7e40255c47326921a8a793: Status 404 returned error can't find the container with id a573e5e7cc9dbf843ce05aa2564758b9ccddceb40e7e40255c47326921a8a793 Feb 16 14:53:48 crc kubenswrapper[4705]: W0216 14:53:48.093829 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55f9230c_7ded_46f1_babb_eba339b0ca6c.slice/crio-ae9e56e8b2694167edf66709880746dc78fa6d099f03e1e4ff35406bc4a68d19 WatchSource:0}: Error finding container ae9e56e8b2694167edf66709880746dc78fa6d099f03e1e4ff35406bc4a68d19: Status 404 returned error can't find the container with id ae9e56e8b2694167edf66709880746dc78fa6d099f03e1e4ff35406bc4a68d19 Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.118226 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.118271 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.118293 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.118316 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118424 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118436 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118457 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118468 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118425 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118511 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:50.118498637 +0000 UTC m=+24.303475713 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118506 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118593 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:50.118570069 +0000 UTC m=+24.303547205 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118603 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118612 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118641 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:50.11860279 +0000 UTC m=+24.303579966 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.118663 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-16 14:53:50.118654122 +0000 UTC m=+24.303631298 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.379729 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:10:29.574826764 +0000 UTC Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.419240 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.419331 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.419464 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:48 crc kubenswrapper[4705]: E0216 14:53:48.419518 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.424115 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.425534 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.426977 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.427808 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.429227 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.429931 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.430732 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.432526 4705 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.433285 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.434476 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.435090 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.436570 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.437199 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.437940 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.439173 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.440590 4705 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.441940 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.442539 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.444401 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.445237 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.445931 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.448464 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.449031 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.450473 4705 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.450932 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.452151 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.452894 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.453795 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.454447 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.454990 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.456166 4705 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 14:53:48 crc kubenswrapper[4705]: 
I0216 14:53:48.456313 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.458189 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.459278 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.459796 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.461915 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.463235 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.463942 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.465469 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: 
I0216 14:53:48.466479 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.467090 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.468314 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.473764 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.474481 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.474975 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.475538 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.476214 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: 
I0216 14:53:48.476962 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.477485 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.477981 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.478469 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.479012 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.479594 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.480102 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.592302 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" 
event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerStarted","Data":"341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.592738 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerStarted","Data":"bebcfc949c7b1affe236f7ab803679c4e2f0ba3699014c926fd5504ebfd97dac"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.594006 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.594037 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.594049 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"a573e5e7cc9dbf843ce05aa2564758b9ccddceb40e7e40255c47326921a8a793"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.595645 4705 generic.go:334] "Generic (PLEG): container finished" podID="48761d4f-98a4-435f-ae5e-6cdb58dbc4a4" containerID="d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e" exitCode=0 Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.595704 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" 
event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerDied","Data":"d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.595718 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerStarted","Data":"347468729d5581dc8fbc6dfd3995d34234764644b295ce5318e33b2927ac1908"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.597268 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bflhj" event={"ID":"55f9230c-7ded-46f1-babb-eba339b0ca6c","Type":"ContainerStarted","Data":"fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.597413 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bflhj" event={"ID":"55f9230c-7ded-46f1-babb-eba339b0ca6c","Type":"ContainerStarted","Data":"ae9e56e8b2694167edf66709880746dc78fa6d099f03e1e4ff35406bc4a68d19"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.598406 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff" exitCode=0 Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.598490 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff"} Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.598530 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"42045b84aca42a832078848d2b0993c882266e872a0d71d75f9c0c7f12bd5a14"} Feb 16 
14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.609643 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.622544 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.641788 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.655770 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.671207 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.685153 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.699053 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.713345 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.731051 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.750570 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.771222 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.786233 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.798001 4705 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.812299 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bin
ary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.837688 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.851908 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.885857 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.930573 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.961547 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.982450 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:48 crc kubenswrapper[4705]: I0216 14:53:48.993311 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:48Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.007805 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.020230 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.037045 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.380567 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-05 21:06:22.399383689 +0000 UTC Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.419068 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:49 crc kubenswrapper[4705]: E0216 14:53:49.419203 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.602791 4705 generic.go:334] "Generic (PLEG): container finished" podID="48761d4f-98a4-435f-ae5e-6cdb58dbc4a4" containerID="34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7" exitCode=0 Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.602862 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerDied","Data":"34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.605902 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.605926 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 
14:53:49.605934 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.605944 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.605952 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.605960 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.606975 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28"} Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.622984 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.649018 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.685023 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.700666 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.713009 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.731204 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.755539 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.771524 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.784836 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.803954 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.819242 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.830883 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.844157 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.858928 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.871785 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.883512 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.909241 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.923169 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.940497 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.960292 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.977834 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:49 crc kubenswrapper[4705]: I0216 14:53:49.997343 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:49Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.017129 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.034034 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.041260 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.041514 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:53:54.041493436 +0000 UTC m=+28.226470512 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.142523 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.142567 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.142587 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.142605 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142734 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142750 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142744 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142791 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142863 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 
14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142889 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142906 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142761 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142873 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:54.142827965 +0000 UTC m=+28.327805101 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142977 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:54.142951989 +0000 UTC m=+28.327929065 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.142989 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:54.14298323 +0000 UTC m=+28.327960296 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.143000 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:54.14299544 +0000 UTC m=+28.327972516 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.372937 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-f7zct"] Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.373266 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.375514 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.376280 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.376532 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.376793 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.380879 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:26:21.646756686 +0000 UTC Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.393065 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.407102 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.419264 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.419355 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.419422 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:50 crc kubenswrapper[4705]: E0216 14:53:50.419552 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.422270 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.438903 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.444503 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e35c89f5-2045-4451-b301-44615b5f73e6-serviceca\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.444572 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e35c89f5-2045-4451-b301-44615b5f73e6-host\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.444598 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5rvf\" (UniqueName: \"kubernetes.io/projected/e35c89f5-2045-4451-b301-44615b5f73e6-kube-api-access-s5rvf\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.455768 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.468959 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.496701 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.514451 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.528354 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.545833 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e35c89f5-2045-4451-b301-44615b5f73e6-host\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.545888 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5rvf\" (UniqueName: \"kubernetes.io/projected/e35c89f5-2045-4451-b301-44615b5f73e6-kube-api-access-s5rvf\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.545912 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e35c89f5-2045-4451-b301-44615b5f73e6-serviceca\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.545954 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e35c89f5-2045-4451-b301-44615b5f73e6-host\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.546820 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e35c89f5-2045-4451-b301-44615b5f73e6-serviceca\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.548897 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.565817 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.573148 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5rvf\" (UniqueName: \"kubernetes.io/projected/e35c89f5-2045-4451-b301-44615b5f73e6-kube-api-access-s5rvf\") pod \"node-ca-f7zct\" (UID: \"e35c89f5-2045-4451-b301-44615b5f73e6\") " pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.590117 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e
73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.610869 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.612757 4705 generic.go:334] "Generic (PLEG): container finished" podID="48761d4f-98a4-435f-ae5e-6cdb58dbc4a4" containerID="c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b" exitCode=0 Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.612877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" 
event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerDied","Data":"c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b"} Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.642407 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.667315 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.679295 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.690085 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-f7zct" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.696075 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 
14:53:50.713113 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.731065 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.739395 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.755102 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.755394 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.759175 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.771737 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.784555 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.797022 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.808631 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.826640 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.838550 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.852987 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e
73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.864108 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.875518 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.888914 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.901846 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.918925 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.931211 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.946855 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.959467 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 14:53:50.975486 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4705]: I0216 
14:53:50.987476 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.009036 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.034524 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.066817 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.381458 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 01:18:05.640740839 +0000 UTC Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.418577 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:51 crc kubenswrapper[4705]: E0216 14:53:51.418706 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.621621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0"} Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.623825 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-f7zct" event={"ID":"e35c89f5-2045-4451-b301-44615b5f73e6","Type":"ContainerStarted","Data":"d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31"} Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.623851 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-f7zct" event={"ID":"e35c89f5-2045-4451-b301-44615b5f73e6","Type":"ContainerStarted","Data":"ccdf87c848f97940099a55a97f506c8acd18cd36e08a6f4487c5e1d6d910b067"} Feb 16 14:53:51 
crc kubenswrapper[4705]: I0216 14:53:51.627425 4705 generic.go:334] "Generic (PLEG): container finished" podID="48761d4f-98a4-435f-ae5e-6cdb58dbc4a4" containerID="e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f" exitCode=0 Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.627490 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerDied","Data":"e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f"} Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.643701 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.665809 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.690866 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.716724 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.730429 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.744505 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.755815 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.771004 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.785457 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.800228 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.811042 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.823480 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.835650 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.852036 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 
14:53:51.865676 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 
14:53:51.880216 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.895960 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.917886 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.945003 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.961963 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.974743 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4705]: I0216 14:53:51.994410 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.008675 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.026850 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.065749 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.108111 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.147701 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.188868 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.270131 4705 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.271826 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.271849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.271860 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.271951 4705 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.278299 4705 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.278594 4705 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.279462 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc 
kubenswrapper[4705]: I0216 14:53:52.279491 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.279501 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.279518 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.279530 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.303380 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.308004 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.308036 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.308047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.308064 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.308076 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.322676 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.332082 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.332114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.332123 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.332139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.332149 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.365665 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.365724 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.365742 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.365766 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.365780 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.377928 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.378234 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.380140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.380187 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.380199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.380218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.380231 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.381948 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:13:31.885527744 +0000 UTC Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.421620 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.422002 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.422212 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:52 crc kubenswrapper[4705]: E0216 14:53:52.422710 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.483249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.483671 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.483681 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.483703 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.483717 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.587282 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.587340 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.587351 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.587388 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.587400 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.637871 4705 generic.go:334] "Generic (PLEG): container finished" podID="48761d4f-98a4-435f-ae5e-6cdb58dbc4a4" containerID="6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2" exitCode=0 Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.637950 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerDied","Data":"6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.654129 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.671982 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.691357 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.691454 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.691466 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.691490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.691505 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.707826 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.728462 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.750104 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.765399 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.785143 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.794138 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.794197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.794214 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.794241 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.794265 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.803603 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.818814 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.843565 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.864686 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.897128 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.897838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.897879 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.897892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.897914 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.897933 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.926884 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:52 crc kubenswrapper[4705]: I0216 14:53:52.945129 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:52Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.001108 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.001140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.001151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.001166 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.001177 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.104357 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.104413 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.104425 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.104449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.104465 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.207298 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.207347 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.207360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.207394 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.207411 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.310187 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.310245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.310256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.310277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.310288 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.382213 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 10:01:08.368906964 +0000 UTC Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.413558 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.413621 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.413633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.413660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.413676 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.418777 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:53 crc kubenswrapper[4705]: E0216 14:53:53.418940 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.429830 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.433396 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.440109 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.444201 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb7
2bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.459562 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.480836 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.495361 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.509981 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.516022 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.516052 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.516065 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.516086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.516099 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.533532 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.559851 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.575499 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.592924 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.607759 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.619473 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.619505 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.619514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.619532 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.619543 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.620785 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.639923 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.646245 4705 generic.go:334] "Generic (PLEG): container finished" podID="48761d4f-98a4-435f-ae5e-6cdb58dbc4a4" containerID="202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b" exitCode=0 Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.647410 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerDied","Data":"202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.662357 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.682877 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.700170 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.716982 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.722153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.722189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.722201 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.722222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.722259 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.737600 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.773071 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.790203 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.807669 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.814315 4705 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.822474 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.824611 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.824640 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.824652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.824669 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.824683 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.839159 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.857206 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.874317 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.898071 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.914949 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.932073 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.932153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.932166 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.932188 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.932203 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.933413 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.960326 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:53 crc kubenswrapper[4705]: I0216 14:53:53.994924 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:53Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.034765 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.034906 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.034967 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.035038 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.035097 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.088508 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.088745 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:02.088707135 +0000 UTC m=+36.273684211 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.138564 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.138631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.138653 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.138679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.138697 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.189845 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.189917 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.189952 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.189975 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190130 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190195 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190215 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190230 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190138 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190297 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190314 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190264 4705 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:02.190235788 +0000 UTC m=+36.375212904 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190408 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:02.190357971 +0000 UTC m=+36.375335057 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190446 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:02.190433923 +0000 UTC m=+36.375411009 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190714 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.190826 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:02.190799223 +0000 UTC m=+36.375776299 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.241412 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.241455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.241468 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.241487 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.241500 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.344863 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.344908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.344918 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.344949 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.344961 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.383884 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:37:24.596521098 +0000 UTC Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.418639 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.418818 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.418952 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:54 crc kubenswrapper[4705]: E0216 14:53:54.419235 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.448561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.448639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.448668 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.448699 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.448726 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.550685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.550735 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.550750 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.550768 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.550780 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.653557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.653660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.653692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.653731 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.653757 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.655654 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" event={"ID":"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4","Type":"ContainerStarted","Data":"a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.661180 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.662234 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.662292 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.682437 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.740024 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.740919 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.743507 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.756680 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.756805 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.756825 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.756852 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.756873 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.766664 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb
217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.786656 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.811869 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.833606 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.856707 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.860281 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.860350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.860398 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.860432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.860457 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.883428 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.920095 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.936288 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.948558 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.959660 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.964128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.964172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.964193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.964220 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.964237 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4705]: I0216 14:53:54.982073 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.002780 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.025100 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.048444 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.068006 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.068077 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.068101 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc 
kubenswrapper[4705]: I0216 14:53:55.068133 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.068159 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.070016 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.090621 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.106179 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.133203 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.159292 4705 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.171756 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.171808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.171821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.171846 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.171861 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.178932 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.207313 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.222809 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.249625 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.265505 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.274861 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.274932 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.274959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.274995 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.275015 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.281283 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.303204 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.315694 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.339549 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.353831 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.378015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.378063 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.378075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.378094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.378111 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.384258 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:24:26.771660037 +0000 UTC Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.419204 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:55 crc kubenswrapper[4705]: E0216 14:53:55.419645 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.481071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.481120 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.481132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.481151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.481162 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.585033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.585102 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.585123 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.585152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.585172 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.664777 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.689214 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.689322 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.689346 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.689400 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.689422 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.792693 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.792771 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.792791 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.792821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.792843 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.896183 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.896257 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.896276 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.896305 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4705]: I0216 14:53:55.896326 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.000052 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.000152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.000177 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.000217 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.000247 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.103506 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.103566 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.103582 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.103613 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.103628 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.206558 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.206623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.206655 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.206681 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.206695 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.309770 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.310106 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.310231 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.310350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.310489 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.384561 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:47:17.551910855 +0000 UTC Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.410659 4705 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.414720 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.414768 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.414780 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.414800 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.414813 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.418627 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.418636 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:56 crc kubenswrapper[4705]: E0216 14:53:56.418886 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:56 crc kubenswrapper[4705]: E0216 14:53:56.419023 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.434441 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.449042 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.471535 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.493040 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.519667 4705 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.519730 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.519749 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.519775 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.519795 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.540052 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.571288 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.598333 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.618604 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.623114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc 
kubenswrapper[4705]: I0216 14:53:56.623158 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.623173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.623195 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.623208 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.633707 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.654708 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.670945 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.673107 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.692942 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.707015 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.721460 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.725566 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.725604 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.725617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.725636 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.725651 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.740106 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.828662 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.828732 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.828749 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.828777 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.828797 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.931475 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.931818 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.931827 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.931843 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4705]: I0216 14:53:56.931852 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.034799 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.034843 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.034851 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.034869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.034906 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.137916 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.137997 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.138018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.138045 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.138067 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.241594 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.241650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.241670 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.241696 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.241717 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.345331 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.345439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.345462 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.345495 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.345519 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.385923 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:46:23.639112761 +0000 UTC Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.418818 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:57 crc kubenswrapper[4705]: E0216 14:53:57.419071 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.449411 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.449457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.449476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.449501 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.449521 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.553181 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.553282 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.553307 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.553335 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.553356 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.657638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.657726 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.657746 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.657776 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.657796 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.677905 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/0.log" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.682504 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a" exitCode=1 Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.682584 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.683909 4705 scope.go:117] "RemoveContainer" containerID="e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.706141 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.723339 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.751040 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.761930 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.762001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.762021 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.762049 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.762068 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.788609 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.813699 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.837326 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.866321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.866435 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.866458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.866489 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.866518 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.874462 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 
14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address 
\\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648
375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.899762 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.918675 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.942630 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.960354 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.970281 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.970439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.970456 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.970475 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.970489 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.979274 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:57 crc kubenswrapper[4705]: I0216 14:53:57.998034 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:57Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.020088 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.043780 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.072736 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc 
kubenswrapper[4705]: I0216 14:53:58.072774 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.072784 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.072801 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.072815 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.176137 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.176193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.176206 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.176227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.176243 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.279300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.279389 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.279403 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.279427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.279440 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.382112 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.382202 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.382246 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.382281 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.382304 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.386281 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:54:55.078840589 +0000 UTC Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.418693 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.418854 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:58 crc kubenswrapper[4705]: E0216 14:53:58.418978 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:58 crc kubenswrapper[4705]: E0216 14:53:58.419155 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.485709 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.485760 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.485770 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.485786 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.485796 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.589094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.589167 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.589181 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.589202 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.589216 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.691594 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/0.log" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.691784 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.691869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.691888 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.691916 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.691935 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.697693 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.697919 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.736426 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainer
Statuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.760957 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.781340 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.795642 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.795725 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.795753 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.795790 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.795816 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.807284 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address 
\\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"n
ame\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.824855 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.842117 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.864022 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.877323 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.899099 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.899171 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.899197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.899232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.899252 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.900361 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.921497 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.945558 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.965509 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:58 crc kubenswrapper[4705]: I0216 14:53:58.991560 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7
130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:
53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:58Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.003088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.003172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.003192 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.003217 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.003233 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.013246 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb
217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.034335 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.106093 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.106168 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.106183 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.106209 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.106229 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.209601 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.209680 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.209707 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.209737 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.209757 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.313215 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.313312 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.313339 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.313421 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.313455 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.387424 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:40:26.386103975 +0000 UTC Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.417525 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.417603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.417622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.417651 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.417671 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.418602 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:59 crc kubenswrapper[4705]: E0216 14:53:59.418822 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.522026 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.522132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.522161 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.522202 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.522233 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.626306 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.626393 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.626408 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.626439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.626456 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.705034 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/1.log" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.706336 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/0.log" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.710731 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52" exitCode=1 Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.710801 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.710930 4705 scope.go:117] "RemoveContainer" containerID="e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.711762 4705 scope.go:117] "RemoveContainer" containerID="be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52" Feb 16 14:53:59 crc kubenswrapper[4705]: E0216 14:53:59.712027 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.730226 4705 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.730277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.730301 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.730328 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.730349 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.737786 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.773864 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.797351 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.832867 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.832921 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.832934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.832959 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.832977 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.833729 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address \\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch 
factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.852889 4705 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.872232 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.890066 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.902965 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.917693 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.931621 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.935734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.935781 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.935792 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.935808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.935820 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.958450 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.976577 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:59 crc kubenswrapper[4705]: I0216 14:53:59.998304 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:59Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.018331 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.038844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.038890 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.038901 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.038919 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.038931 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.040187 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.142327 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.142402 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.142412 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.142431 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.142441 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.245902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.245959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.245972 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.245993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.246003 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.349079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.349147 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.349170 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.349197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.349216 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.388237 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:12:47.845996548 +0000 UTC Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.418693 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.418783 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:00 crc kubenswrapper[4705]: E0216 14:54:00.418909 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:00 crc kubenswrapper[4705]: E0216 14:54:00.419076 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.441359 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66"] Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.442296 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.445266 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.446781 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.456237 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.456324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.456348 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.456406 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.456431 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.465291 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.491225 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.510411 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.530773 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.551126 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.559740 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.559799 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.559844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.559873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.559891 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.572980 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.574023 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33860ee2-697c-4950-af95-26d7916c0a4f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.574155 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33860ee2-697c-4950-af95-26d7916c0a4f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.574234 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33860ee2-697c-4950-af95-26d7916c0a4f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.574273 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvxb\" (UniqueName: \"kubernetes.io/projected/33860ee2-697c-4950-af95-26d7916c0a4f-kube-api-access-txvxb\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.608283 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:3
1Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.632048 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.654056 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.663176 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.663266 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.663287 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.663324 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.663356 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.675885 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33860ee2-697c-4950-af95-26d7916c0a4f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.676017 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33860ee2-697c-4950-af95-26d7916c0a4f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.676088 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33860ee2-697c-4950-af95-26d7916c0a4f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.676125 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-txvxb\" (UniqueName: \"kubernetes.io/projected/33860ee2-697c-4950-af95-26d7916c0a4f-kube-api-access-txvxb\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.677702 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33860ee2-697c-4950-af95-26d7916c0a4f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.677728 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33860ee2-697c-4950-af95-26d7916c0a4f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.686725 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33860ee2-697c-4950-af95-26d7916c0a4f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.695809 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address \\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch 
factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.708130 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txvxb\" 
(UniqueName: \"kubernetes.io/projected/33860ee2-697c-4950-af95-26d7916c0a4f-kube-api-access-txvxb\") pod \"ovnkube-control-plane-749d76644c-7lk66\" (UID: \"33860ee2-697c-4950-af95-26d7916c0a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.721269 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/1.log" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.728397 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.751636 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.765428 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.766729 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.766882 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.766970 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.767067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.767163 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.780261 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: W0216 14:54:00.792449 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33860ee2_697c_4950_af95_26d7916c0a4f.slice/crio-46bd97614f40b4d789a0bba86d378a4f233639445eef5c0bb7968b609fad9e5b WatchSource:0}: Error finding container 46bd97614f40b4d789a0bba86d378a4f233639445eef5c0bb7968b609fad9e5b: Status 404 returned error can't find the container with id 46bd97614f40b4d789a0bba86d378a4f233639445eef5c0bb7968b609fad9e5b Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.801311 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.825593 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.844015 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:00Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.872003 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.872079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.872096 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.872122 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.872140 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.976624 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.976682 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.976695 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.976720 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:00 crc kubenswrapper[4705]: I0216 14:54:00.976734 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.081574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.082063 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.082077 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.082104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.082123 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.185538 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.185596 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.185609 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.185631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.185646 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.388802 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:15:57.317519687 +0000 UTC Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.419338 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:01 crc kubenswrapper[4705]: E0216 14:54:01.419524 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.451533 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.451632 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.451648 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.451684 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.451699 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.556113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.556193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.556222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.556258 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.556282 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.620727 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-8m64f"] Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.622694 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:01 crc kubenswrapper[4705]: E0216 14:54:01.622829 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.644745 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.660186 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.660267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.660292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.660326 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.660352 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.670126 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.688186 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.688357 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fdqv\" (UniqueName: \"kubernetes.io/projected/67dea3c6-e6a4-4078-9bf2-6928c39f498b-kube-api-access-6fdqv\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.690936 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.708547 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.723841 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.737831 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" event={"ID":"33860ee2-697c-4950-af95-26d7916c0a4f","Type":"ContainerStarted","Data":"c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.738161 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" event={"ID":"33860ee2-697c-4950-af95-26d7916c0a4f","Type":"ContainerStarted","Data":"d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.738308 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" event={"ID":"33860ee2-697c-4950-af95-26d7916c0a4f","Type":"ContainerStarted","Data":"46bd97614f40b4d789a0bba86d378a4f233639445eef5c0bb7968b609fad9e5b"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.748551 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c
510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.765149 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.765539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.765752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.765934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.766111 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.772951 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.790023 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.790157 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fdqv\" (UniqueName: \"kubernetes.io/projected/67dea3c6-e6a4-4078-9bf2-6928c39f498b-kube-api-access-6fdqv\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:01 crc kubenswrapper[4705]: E0216 14:54:01.790339 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:01 crc kubenswrapper[4705]: E0216 14:54:01.790482 4705 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:02.290450474 +0000 UTC m=+36.475427550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.798465 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.816105 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fdqv\" (UniqueName: \"kubernetes.io/projected/67dea3c6-e6a4-4078-9bf2-6928c39f498b-kube-api-access-6fdqv\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.825410 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.846668 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address \\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch 
factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.857533 4705 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc 
kubenswrapper[4705]: I0216 14:54:01.870884 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.870942 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.870959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.870984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.871002 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.875536 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.888845 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.909614 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.924716 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.939354 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.956394 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.974716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.974780 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.974797 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.974822 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.974837 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4705]: I0216 14:54:01.975122 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354
da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.002010 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address \\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch 
factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:01Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.037737 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.057757 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.074424 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e
73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.079165 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.079220 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.079234 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc 
kubenswrapper[4705]: I0216 14:54:02.079257 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.079274 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.090168 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.093541 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.093770 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.093730136 +0000 UTC m=+52.278707222 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.105097 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc 
kubenswrapper[4705]: I0216 14:54:02.122256 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.137939 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.151953 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.167680 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.181598 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.181664 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.181683 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.181711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.181731 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.185852 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.195220 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.195297 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.195342 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.195414 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195513 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195522 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195579 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195604 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195625 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195650 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.195618839 +0000 UTC m=+52.380595955 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195540 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195691 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.19566665 +0000 UTC m=+52.380643766 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195700 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195718 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.195705641 +0000 UTC m=+52.380682757 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195719 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.195802 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.195777033 +0000 UTC m=+52.380754329 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.206554 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.219909 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.237545 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.253087 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.277636 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.285291 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.285355 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.285422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.285455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.285473 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.297074 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.297299 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.297460 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:03.29743004 +0000 UTC m=+37.482407146 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.389042 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:22:24.988244472 +0000 UTC Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.390957 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.391197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.391428 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.391675 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.391873 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.418872 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.418973 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.419126 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.419221 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.495813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.495875 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.495892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.495920 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.495943 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.599565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.599632 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.599650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.599677 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.599697 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.702285 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.702362 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.702410 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.702438 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.702458 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.766960 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.767016 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.767027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.767043 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.767054 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.784798 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.789343 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.789457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.789482 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.789507 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.789526 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.811201 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.817470 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.817535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.817556 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.817585 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.817608 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.836487 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.841342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.841450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.841479 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.841508 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.841530 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.877653 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.883519 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.883596 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.883624 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.883655 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.883679 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.913930 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4705]: E0216 14:54:02.914116 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.916360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.916436 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.916449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.916469 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4705]: I0216 14:54:02.916483 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.019295 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.019358 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.019399 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.019425 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.019443 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.123091 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.123143 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.123161 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.123184 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.123201 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.226697 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.226806 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.226825 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.226853 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.226872 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.310218 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:03 crc kubenswrapper[4705]: E0216 14:54:03.310548 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:03 crc kubenswrapper[4705]: E0216 14:54:03.310677 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:05.310646734 +0000 UTC m=+39.495623850 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.329939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.330011 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.330029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.330056 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.330074 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.390157 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:03:27.74052768 +0000 UTC Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.418883 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.418912 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:03 crc kubenswrapper[4705]: E0216 14:54:03.419101 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:03 crc kubenswrapper[4705]: E0216 14:54:03.419257 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.433359 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.433446 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.433487 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.433519 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.433545 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.536422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.536463 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.536473 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.536490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.536501 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.639897 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.639976 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.640004 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.640037 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.640056 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.743900 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.743965 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.743983 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.744007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.744024 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.848337 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.848420 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.848439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.848463 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.848480 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.952101 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.952191 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.952219 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.952256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4705]: I0216 14:54:03.952278 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.055742 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.055832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.055857 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.055890 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.055911 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.159619 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.159689 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.159720 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.159758 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.159781 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.264536 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.264605 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.264623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.264647 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.264697 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.368117 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.368198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.368216 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.368248 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.368268 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.390674 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:01:42.851302112 +0000 UTC Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.419186 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.419244 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:04 crc kubenswrapper[4705]: E0216 14:54:04.419447 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:04 crc kubenswrapper[4705]: E0216 14:54:04.419663 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.471583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.471657 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.471677 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.471705 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.471728 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.575212 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.575296 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.575316 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.575434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.575458 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.624317 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.647781 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.669308 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.678055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.678120 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.678145 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.678175 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.678199 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.696794 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.720328 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.744400 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.778061 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e783e9898399b400c65f7cd19ff9e34229cdb080d5ddae82feab0f9e97b1863a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:56Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI0216 14:53:56.892610 6046 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:56.892623 6046 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:56.892658 6046 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:56.892654 6046 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:56.892666 6046 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:56.892683 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:56.892685 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:56.892699 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:56.892710 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:56.892721 6046 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:56.892771 6046 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:56.892826 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:56.892851 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:56.892921 6046 factory.go:656] Stopping watch factory\\\\nI0216 14:53:56.892956 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:53:56.892995 6046 metrics.go:553] Stopping metrics server at address \\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch 
factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.781445 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.781516 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.781535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.781561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.781580 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.814022 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.834449 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.856102 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.871996 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.884155 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.884213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.884232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.884258 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.884277 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.887964 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc 
kubenswrapper[4705]: I0216 14:54:04.905996 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.924707 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.946940 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.965593 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.987837 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.987915 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.987934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc 
kubenswrapper[4705]: I0216 14:54:04.987965 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.987991 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4705]: I0216 14:54:04.989440 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.008154 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:
00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.091679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.091742 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.091776 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.091805 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.091823 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.195204 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.195255 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.195267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.195285 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.195297 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.298249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.298289 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.298298 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.298350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.298363 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.336946 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:05 crc kubenswrapper[4705]: E0216 14:54:05.337196 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:05 crc kubenswrapper[4705]: E0216 14:54:05.337309 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:09.337280479 +0000 UTC m=+43.522257585 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.391363 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:36:13.613953876 +0000 UTC Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.402674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.402780 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.402811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.402849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.402876 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.419228 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.419265 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:05 crc kubenswrapper[4705]: E0216 14:54:05.419568 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:05 crc kubenswrapper[4705]: E0216 14:54:05.419805 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.506943 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.507018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.507036 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.507066 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.507087 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.610871 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.611008 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.611030 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.611055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.611074 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.714639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.714711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.714729 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.714754 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.714772 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.745580 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.747252 4705 scope.go:117] "RemoveContainer" containerID="be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52" Feb 16 14:54:05 crc kubenswrapper[4705]: E0216 14:54:05.747632 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.769948 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.790067 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.806296 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044
b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.818941 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.819207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.819447 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.819709 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.819936 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.827136 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.852863 4705 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.875799 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.910917 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.923262 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.923326 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.923345 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.923398 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.923421 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.946173 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.971043 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4705]: I0216 14:54:05.992877 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.009867 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.027870 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.027980 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.028012 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.028048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.028076 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.032241 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc 
kubenswrapper[4705]: I0216 14:54:06.052614 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.076117 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.095106 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.111875 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.129652 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.132351 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.132440 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.132464 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.132492 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.132519 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.236304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.236337 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.236347 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.236360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.236385 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.339991 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.340086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.340111 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.340154 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.340181 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.392440 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:09:12.296361742 +0000 UTC Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.418990 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.419020 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:06 crc kubenswrapper[4705]: E0216 14:54:06.419153 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:06 crc kubenswrapper[4705]: E0216 14:54:06.419405 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.443153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.443235 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.443254 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.443283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.443306 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.445264 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:
53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.464414 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.485780 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.507313 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.540455 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.545787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.545908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.545980 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.546024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.546100 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.564050 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb
217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.598105 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.620315 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.641466 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.650031 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.650090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.650108 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.650135 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.650154 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.676306 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.698613 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.719992 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.737249 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.753715 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.753791 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.753810 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.753832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.753850 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.756978 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.777056 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.796024 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc 
kubenswrapper[4705]: I0216 14:54:06.816891 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:06Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.856386 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.856444 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.856455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.856470 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.856479 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.959856 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.959907 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.959920 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.959939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4705]: I0216 14:54:06.959951 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.063604 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.063648 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.063661 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.063680 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.063695 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.167639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.167685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.167695 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.167711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.167720 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.271592 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.271671 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.271694 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.271736 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.271761 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.374348 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.374445 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.374468 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.374503 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.374532 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.392906 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:33:51.89743431 +0000 UTC Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.418798 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.418866 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:07 crc kubenswrapper[4705]: E0216 14:54:07.419064 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:07 crc kubenswrapper[4705]: E0216 14:54:07.419264 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.477539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.477602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.477623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.477651 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.477672 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.581436 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.581516 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.581535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.581564 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.581582 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.685266 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.685333 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.685418 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.685449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.685471 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.789489 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.789575 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.789599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.789632 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.789655 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.893523 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.893606 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.893627 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.893657 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.893681 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.997554 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.997639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.997664 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.997700 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4705]: I0216 14:54:07.997724 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.120010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.120088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.120121 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.120162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.120190 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.223242 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.223323 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.223344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.223409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.223431 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.326887 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.326940 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.326962 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.326984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.327003 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.393149 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:21:16.388569613 +0000 UTC Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.418992 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.419048 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:08 crc kubenswrapper[4705]: E0216 14:54:08.419212 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:08 crc kubenswrapper[4705]: E0216 14:54:08.420052 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.430099 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.430155 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.430174 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.430201 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.430224 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.534076 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.534174 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.534199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.534229 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.534251 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.638056 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.638154 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.638189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.638230 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.638255 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.741739 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.741804 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.741822 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.741852 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.741873 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.845447 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.845557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.845649 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.845685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.845729 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.948439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.948492 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.948508 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.948531 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4705]: I0216 14:54:08.948548 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.051545 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.051617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.051630 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.051648 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.051659 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.154583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.154674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.154700 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.154728 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.154748 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.257862 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.257949 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.257990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.258015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.258033 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.361173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.361238 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.361256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.361280 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.361296 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.389129 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:09 crc kubenswrapper[4705]: E0216 14:54:09.389344 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:09 crc kubenswrapper[4705]: E0216 14:54:09.389456 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:17.389432874 +0000 UTC m=+51.574409960 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.394004 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:52:01.485510686 +0000 UTC Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.418322 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.418425 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:09 crc kubenswrapper[4705]: E0216 14:54:09.418532 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:09 crc kubenswrapper[4705]: E0216 14:54:09.418684 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.464077 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.464142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.464162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.464197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.464217 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.568084 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.568160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.568179 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.568209 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.568245 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.672199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.672269 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.672288 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.672314 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.672333 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.775960 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.776051 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.776072 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.776103 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.776123 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.879555 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.879639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.879664 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.879698 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.879722 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.983399 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.983443 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.983462 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.983486 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4705]: I0216 14:54:09.983501 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.086747 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.086804 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.086825 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.086855 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.086879 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.190572 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.190631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.190649 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.190673 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.190690 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.294868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.294933 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.294954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.294980 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.294998 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.394795 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:22:43.873095223 +0000 UTC Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.398669 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.398761 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.398784 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.398832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.398856 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.419407 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:10 crc kubenswrapper[4705]: E0216 14:54:10.419610 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.419717 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:10 crc kubenswrapper[4705]: E0216 14:54:10.419864 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.502253 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.502322 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.502342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.502401 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.502423 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.605262 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.605337 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.605358 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.605422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.605445 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.709271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.709337 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.709359 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.709418 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.709441 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.812406 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.812490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.812515 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.812547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.812567 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.916976 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.917067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.917094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.917126 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4705]: I0216 14:54:10.917152 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.021339 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.021447 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.021466 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.021500 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.021521 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.124583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.124659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.124678 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.124704 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.124725 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.227602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.227679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.227698 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.227724 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.227744 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.331845 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.332032 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.332055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.332098 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.332119 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.396023 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:01:47.165547959 +0000 UTC Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.418819 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.418937 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:11 crc kubenswrapper[4705]: E0216 14:54:11.419168 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:11 crc kubenswrapper[4705]: E0216 14:54:11.419335 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.436244 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.436314 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.436331 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.436361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.436415 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.539133 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.539268 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.539289 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.539320 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.539341 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.642209 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.642274 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.642295 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.642564 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.642628 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.747599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.747678 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.747698 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.747734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.747757 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.852798 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.852998 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.853024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.853058 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.853083 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.957241 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.957306 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.957324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.957348 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4705]: I0216 14:54:11.957365 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.061445 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.061529 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.061543 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.061574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.061590 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.165300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.165405 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.165426 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.165452 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.165473 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.268651 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.268721 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.268733 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.268755 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.268769 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.373359 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.373577 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.373603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.373634 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.373665 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.396496 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 00:22:41.37038067 +0000 UTC Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.419163 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:12 crc kubenswrapper[4705]: E0216 14:54:12.419481 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.419189 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:12 crc kubenswrapper[4705]: E0216 14:54:12.420142 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.478416 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.478535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.478565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.478607 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.478635 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.582596 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.582670 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.582690 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.582720 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.582740 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.686341 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.686449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.686480 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.686519 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.686544 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.790095 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.790166 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.790186 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.790212 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.790265 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.893716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.893786 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.893803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.893827 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.893848 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.997708 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.997776 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.997794 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.997820 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4705]: I0216 14:54:12.997841 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.100991 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.101058 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.101075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.101100 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.101118 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.204199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.204280 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.204300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.204329 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.204350 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.232149 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.232227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.232246 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.232273 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.232292 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.253881 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.259406 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.259457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.259470 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.259489 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.259503 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.306031 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.310832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.310899 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.310918 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.310946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.310965 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.331176 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.336892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.336923 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.336934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.336955 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.336969 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.359325 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.359500 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.362128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.362196 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.362219 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.362246 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.362266 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.396728 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:08:34.308942868 +0000 UTC Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.419149 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.419257 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.419358 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:13 crc kubenswrapper[4705]: E0216 14:54:13.419504 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.466424 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.466499 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.466518 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.466544 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.466562 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.569530 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.569576 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.569588 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.569603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.569613 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.673321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.673416 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.673455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.673495 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.673519 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.776193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.776257 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.776272 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.776292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.776305 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.878692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.878763 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.878780 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.878808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.878827 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.981964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.982018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.982029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.982048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4705]: I0216 14:54:13.982062 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.084966 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.085047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.085072 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.085103 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.085120 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.188035 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.188124 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.188156 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.188184 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.188204 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.291270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.291410 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.291426 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.291448 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.291462 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.394019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.394055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.394066 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.394083 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.394094 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.397416 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:13:28.274190396 +0000 UTC Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.418870 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:14 crc kubenswrapper[4705]: E0216 14:54:14.419031 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.419288 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:14 crc kubenswrapper[4705]: E0216 14:54:14.419445 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.496589 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.496624 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.496633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.496648 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.496658 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.599855 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.600245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.600440 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.600622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.600755 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.704072 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.704113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.704124 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.704142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.704153 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.807218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.807293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.807318 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.807350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.807416 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.912201 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.912283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.912301 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.912332 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4705]: I0216 14:54:14.912355 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.015826 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.015883 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.015894 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.015914 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.015926 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.119287 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.119331 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.119339 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.119355 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.119378 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.222838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.222893 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.222911 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.222936 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.222957 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.326630 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.326683 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.326694 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.326714 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.326727 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.398173 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:22:08.390671936 +0000 UTC Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.418757 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:15 crc kubenswrapper[4705]: E0216 14:54:15.418901 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.418763 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:15 crc kubenswrapper[4705]: E0216 14:54:15.419079 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.429229 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.429293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.429311 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.429338 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.429357 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.532278 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.532318 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.532326 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.532342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.532353 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.635689 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.636240 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.636443 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.636648 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.636820 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.739851 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.740496 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.740536 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.740563 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.740578 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.844682 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.844760 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.844778 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.844808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.844832 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.948092 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.948163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.948181 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.948208 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4705]: I0216 14:54:15.948229 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.051269 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.051344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.051428 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.051499 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.051520 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.153723 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.153762 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.153772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.153785 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.153793 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.258231 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.258339 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.258403 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.258450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.258491 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.361642 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.361685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.361694 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.361711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.361721 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.398320 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:40:46.316014484 +0000 UTC Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.418817 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.418921 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:16 crc kubenswrapper[4705]: E0216 14:54:16.419025 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:16 crc kubenswrapper[4705]: E0216 14:54:16.419169 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.442272 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.464826 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.464897 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.464917 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.464942 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.464961 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.464830 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.490698 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ku
be-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb
085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.515673 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.549886 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.567850 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.567904 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.567923 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.567946 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.567965 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.575547 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.602835 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.618572 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.637120 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e
73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.649312 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.662876 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc 
kubenswrapper[4705]: I0216 14:54:16.669959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.670016 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.670027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.670041 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.670052 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.671244 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.680559 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.683516 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.700767 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.724235 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.743532 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.765882 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.773028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc 
kubenswrapper[4705]: I0216 14:54:16.773126 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.773151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.773182 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.773205 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.787440 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.805045 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb
3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.829969 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.848282 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.862035 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.875993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.876055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.876074 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.876102 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.876121 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.879087 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.894678 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.910620 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc 
kubenswrapper[4705]: I0216 14:54:16.931346 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.951468 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.969222 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044
b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.979133 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.979204 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.979231 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.979267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.979295 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4705]: I0216 14:54:16.987884 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.009943 4705 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.033429 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.051054 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.076043 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.080984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.081024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.081033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.081048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.081057 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.093537 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.108430 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.129502 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.183775 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.183819 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.183837 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.183859 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.183874 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.286142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.286183 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.286195 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.286222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.286233 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.388899 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.388982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.389006 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.389033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.389051 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.399415 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:27:56.501943159 +0000 UTC Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.418971 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.419026 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:17 crc kubenswrapper[4705]: E0216 14:54:17.419201 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:17 crc kubenswrapper[4705]: E0216 14:54:17.419366 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.420034 4705 scope.go:117] "RemoveContainer" containerID="be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.482992 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:17 crc kubenswrapper[4705]: E0216 14:54:17.483198 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:17 crc kubenswrapper[4705]: E0216 14:54:17.483300 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs 
podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:33.48327381 +0000 UTC m=+67.668250916 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.492420 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.492461 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.492472 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.492490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.492502 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.595982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.596053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.596078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.596111 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.596136 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.698405 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.698461 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.698479 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.698504 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.698521 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.800873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.800948 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.800974 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.801007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.801030 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.806697 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/1.log" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.811007 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.811700 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.829113 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.840687 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.850638 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.862842 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.876903 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.889775 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.903753 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.903795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.903806 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.903822 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.903833 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.914222 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.945589 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.964526 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044
b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.983167 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:17 crc kubenswrapper[4705]: I0216 14:54:17.999029 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.006235 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.006274 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.006286 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.006304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.006315 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.017117 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.031615 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.045463 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.068063 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.083970 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.098305 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.109214 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.109259 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.109270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.109290 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.109306 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.153982 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath
\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.192184 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.192328 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:50.192301826 +0000 UTC m=+84.377278902 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.212011 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.212046 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.212055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: 
I0216 14:54:18.212071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.212081 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.293905 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.293951 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.293970 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.293989 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294099 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294096 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294143 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294192 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:50.294174259 +0000 UTC m=+84.479151335 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294232 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:50.29421106 +0000 UTC m=+84.479188146 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294154 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294255 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294268 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294300 4705 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:50.294291252 +0000 UTC m=+84.479268328 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294113 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294325 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.294348 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:50.294342394 +0000 UTC m=+84.479319460 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.314160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.314203 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.314216 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.314236 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.314249 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.399645 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:35:55.921308629 +0000 UTC Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.416055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.416095 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.416109 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.416127 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.416139 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.418579 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.418636 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.418693 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.418915 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.519310 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.519349 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.519361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.519393 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.519406 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.621953 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.622001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.622013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.622032 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.622044 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.726015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.726228 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.726239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.726256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.726268 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.817834 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/2.log" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.819011 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/1.log" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.823880 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059" exitCode=1 Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.823938 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.823999 4705 scope.go:117] "RemoveContainer" containerID="be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.825570 4705 scope.go:117] "RemoveContainer" containerID="93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059" Feb 16 14:54:18 crc kubenswrapper[4705]: E0216 14:54:18.825939 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.829605 4705 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.829641 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.829652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.829691 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.829706 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.854190 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.875220 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.910680 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be146968fbedafb4ba682662ea4d0b6922fb0754a28ec5973cba026c5bf79e52\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:58Z\\\",\\\"message\\\":\\\"o:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:58.746356 6184 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:58.746428 6184 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:58.746452 6184 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.746465 6184 factory.go:656] Stopping watch factory\\\\nI0216 14:53:58.746498 6184 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:58.746520 6184 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:58.746534 6184 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:58.746548 6184 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:58.746562 6184 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:58.746876 6184 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:58.747298 6184 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] 
Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/netwo
rks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.930485 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.932692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.932760 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.932788 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.932821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.932850 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.965983 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:18 crc kubenswrapper[4705]: I0216 14:54:18.981060 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.001828 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.019923 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.036171 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.036222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.036239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.036261 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.036275 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.036954 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc 
kubenswrapper[4705]: I0216 14:54:19.056607 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.073761 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.089629 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.108263 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.126484 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.138577 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044
b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.139039 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.139111 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.139128 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.139150 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.139166 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.150881 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.161693 4705 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.174415 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.241654 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.241687 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.241697 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.241713 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.241723 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.344070 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.344111 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.344122 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.344137 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.344147 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.400638 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:26:03.84875393 +0000 UTC Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.419316 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.419322 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:19 crc kubenswrapper[4705]: E0216 14:54:19.419505 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:19 crc kubenswrapper[4705]: E0216 14:54:19.419593 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.446794 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.446851 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.446868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.446890 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.446906 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.549672 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.549708 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.549727 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.549745 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.549755 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.652679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.652743 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.652765 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.652795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.652820 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.755426 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.755501 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.755518 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.755542 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.755560 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.833897 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/2.log" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.837638 4705 scope.go:117] "RemoveContainer" containerID="93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059" Feb 16 14:54:19 crc kubenswrapper[4705]: E0216 14:54:19.837778 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.858250 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.858332 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.858358 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.858418 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.858447 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.859324 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h
9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-b
incopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.870584 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.883937 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.913111 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.932318 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.949472 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.962015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.962055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.962067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.962082 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.962091 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.971794 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:19 crc kubenswrapper[4705]: I0216 14:54:19.992247 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.014812 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.033747 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.055111 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.065539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.065607 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.065617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.065635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.065645 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.073906 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.089133 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.107263 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.126617 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.142580 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.155828 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.167823 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.167894 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.167913 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.167942 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.167961 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.182759 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.270914 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.270999 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.271024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.271059 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.271105 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.374036 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.374308 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.374441 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.374547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.374637 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.401715 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:07:11.534529006 +0000 UTC Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.419552 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:20 crc kubenswrapper[4705]: E0216 14:54:20.419789 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.419823 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:20 crc kubenswrapper[4705]: E0216 14:54:20.420166 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.477599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.477684 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.477707 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.477737 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.477756 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.580584 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.580658 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.580679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.580708 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.580728 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.683067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.683128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.683145 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.683170 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.683187 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.786885 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.786946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.786965 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.786992 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.787009 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.890078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.890151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.890167 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.890193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.890212 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.993342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.993449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.993468 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.993498 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4705]: I0216 14:54:20.993521 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.097320 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.097427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.097449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.097504 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.097525 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.200939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.201030 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.201085 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.201112 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.201163 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.305002 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.305104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.305128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.305162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.305186 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.402836 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:33:47.034260063 +0000 UTC Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.408484 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.408571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.408613 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.408648 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.408679 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.419163 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.419224 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:21 crc kubenswrapper[4705]: E0216 14:54:21.419499 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:21 crc kubenswrapper[4705]: E0216 14:54:21.419709 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.511660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.511717 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.511736 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.511763 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.511782 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.615530 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.615599 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.615617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.615716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.615739 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.719027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.719121 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.719151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.719186 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.719213 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.823228 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.823306 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.823339 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.823463 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.823500 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.927483 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.927538 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.927550 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.927571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4705]: I0216 14:54:21.927588 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.031100 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.031169 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.031194 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.031229 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.031255 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.134624 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.134738 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.134764 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.134838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.134860 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.237135 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.237185 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.237195 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.237208 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.237218 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.339734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.339788 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.339801 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.339818 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.339829 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.403980 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:17:04.750099784 +0000 UTC Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.419463 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.419475 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:22 crc kubenswrapper[4705]: E0216 14:54:22.419603 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:22 crc kubenswrapper[4705]: E0216 14:54:22.419710 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.442573 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.442604 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.442638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.442655 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.442666 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.546141 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.546198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.546219 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.546251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.546276 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.649062 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.649153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.649172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.649617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.649912 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.753265 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.753333 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.753353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.753420 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.753439 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.855302 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.855365 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.855396 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.855415 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.855428 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.957682 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.957734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.957749 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.957768 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4705]: I0216 14:54:22.957783 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.060731 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.060787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.060798 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.060817 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.060829 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.163228 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.163300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.163327 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.163352 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.163402 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.266200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.266270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.266293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.266325 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.266349 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.369886 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.369961 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.369980 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.370007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.370028 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.404359 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:47:12.463928197 +0000 UTC Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.418757 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.418836 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.418911 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.419045 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.473694 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.473787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.473827 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.473909 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.473939 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.577257 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.577311 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.577327 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.577347 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.577362 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.673527 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.673575 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.673587 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.673607 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.673621 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.693698 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.698731 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.698770 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.698784 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.698803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.698816 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.718528 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.722715 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.722772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.722791 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.722813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.722833 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.741569 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.746175 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.746211 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.746225 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.746244 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.746257 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.759911 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.764615 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.764650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.764663 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.764679 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.764691 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.778826 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:23 crc kubenswrapper[4705]: E0216 14:54:23.779067 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.781826 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.781873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.781884 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.781903 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.781914 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.885129 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.885200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.885213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.885252 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.885266 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.988173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.988248 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.988266 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.988287 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4705]: I0216 14:54:23.988311 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.091479 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.091556 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.091573 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.091631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.091651 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.194423 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.194503 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.194527 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.194556 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.194578 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.297811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.297880 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.297891 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.297935 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.297951 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.400398 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.400453 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.400465 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.400504 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.400517 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.404825 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:20:19.6378647 +0000 UTC Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.419293 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.419454 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:24 crc kubenswrapper[4705]: E0216 14:54:24.419575 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:24 crc kubenswrapper[4705]: E0216 14:54:24.419699 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.504006 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.504097 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.504115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.504139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.504157 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.607107 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.607173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.607193 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.607218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.607234 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.710180 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.710259 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.710285 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.710317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.710344 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.813748 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.813802 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.813822 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.813847 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.813867 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.917476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.917521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.917537 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.917560 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4705]: I0216 14:54:24.917579 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.020286 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.020344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.020360 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.020420 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.020439 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.123871 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.123934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.123954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.123982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.124000 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.227334 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.227432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.227451 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.227478 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.227496 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.330168 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.330216 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.330232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.330251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.330263 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.405790 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:00:23.56200549 +0000 UTC Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.419221 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.419230 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:25 crc kubenswrapper[4705]: E0216 14:54:25.419475 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:25 crc kubenswrapper[4705]: E0216 14:54:25.419536 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.432979 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.433033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.433051 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.433072 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.433089 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.536465 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.536526 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.536543 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.536565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.536581 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.639145 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.639203 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.639222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.639248 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.639267 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.742667 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.742740 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.742762 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.742795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.742818 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.846202 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.846254 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.846272 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.846291 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.846304 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.949514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.949573 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.949592 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.949616 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4705]: I0216 14:54:25.949633 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.052727 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.052785 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.052802 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.052828 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.052844 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.155964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.156029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.156047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.156071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.156088 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.258731 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.258795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.258813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.258838 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.258856 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.361817 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.361892 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.361910 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.361934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.361951 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.406350 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:12:08.788241218 +0000 UTC Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.418876 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.418927 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:26 crc kubenswrapper[4705]: E0216 14:54:26.419127 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:26 crc kubenswrapper[4705]: E0216 14:54:26.419283 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.452395 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c
510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.465006 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.465060 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.465083 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.465114 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.465134 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.475430 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb
217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.494157 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.535589 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.563099 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.567523 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.567572 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.567591 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.567616 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.567635 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.580170 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.607925 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.624911 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.644910 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.655806 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.669941 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.669987 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.670005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.670026 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.670049 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.677487 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.694597 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.712570 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc 
kubenswrapper[4705]: I0216 14:54:26.737777 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.763335 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.773555 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.773726 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.773758 4705 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.773844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.773917 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.787733 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd
0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.810339 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.834694 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.876937 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc 
kubenswrapper[4705]: I0216 14:54:26.876985 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.876998 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.877017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.877033 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.980797 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.980851 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.980863 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.980885 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4705]: I0216 14:54:26.980897 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.084029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.084093 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.084113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.084140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.084160 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.187201 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.187241 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.187252 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.187268 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.187278 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.290053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.290151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.290175 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.290209 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.290233 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.393127 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.393207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.393226 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.393251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.393270 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.407754 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:38:44.353861583 +0000 UTC Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.419115 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.419152 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:27 crc kubenswrapper[4705]: E0216 14:54:27.419260 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:27 crc kubenswrapper[4705]: E0216 14:54:27.419460 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.495598 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.495645 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.495659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.495675 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.495687 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.598873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.598934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.598952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.598977 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.598995 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.701849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.701951 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.702009 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.702037 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.702097 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.805709 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.805789 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.805808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.805840 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.805862 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.909243 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.909315 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.909336 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.909365 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4705]: I0216 14:54:27.909428 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.012780 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.012850 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.012869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.012896 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.012915 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.117483 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.117536 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.117548 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.117571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.117584 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.221928 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.221991 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.222011 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.222039 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.222059 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.325261 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.325338 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.325357 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.325416 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.325438 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.408748 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:59:30.123914105 +0000 UTC Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.419153 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.419260 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:28 crc kubenswrapper[4705]: E0216 14:54:28.419450 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:28 crc kubenswrapper[4705]: E0216 14:54:28.419623 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.428209 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.428281 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.428308 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.428336 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.428357 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.530959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.531019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.531038 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.531064 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.531081 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.634268 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.634330 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.634346 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.634363 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.634393 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.737419 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.737463 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.737475 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.737498 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.737512 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.840816 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.840882 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.840902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.840922 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.840936 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.944954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.945070 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.945097 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.945127 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4705]: I0216 14:54:28.945149 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.048087 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.048155 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.048172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.048200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.048219 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.151220 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.151299 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.151321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.151355 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.151407 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.254162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.254244 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.254267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.254296 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.254318 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.356914 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.356968 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.356979 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.356998 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.357012 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.409774 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:59:38.370027779 +0000 UTC Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.419319 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.419319 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:29 crc kubenswrapper[4705]: E0216 14:54:29.419522 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:29 crc kubenswrapper[4705]: E0216 14:54:29.419758 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.460512 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.460551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.460563 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.460582 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.460595 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.563765 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.563859 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.563907 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.563931 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.563949 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.666539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.666587 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.666602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.666620 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.666631 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.770079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.770124 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.770138 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.770165 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.770189 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.872690 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.872767 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.872790 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.872863 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.872879 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.976167 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.976207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.976220 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.976238 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4705]: I0216 14:54:29.976250 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.078871 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.078920 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.078934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.078954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.078971 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.183683 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.183760 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.183808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.183846 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.183872 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.286529 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.286573 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.286582 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.286595 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.286603 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.389048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.389081 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.389090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.389106 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.389116 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.410941 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:42:54.197311221 +0000 UTC Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.419431 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.419431 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:30 crc kubenswrapper[4705]: E0216 14:54:30.419537 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:30 crc kubenswrapper[4705]: E0216 14:54:30.419597 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.491049 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.491082 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.491090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.491103 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.491113 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.593485 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.593522 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.593531 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.593546 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.593555 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.695294 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.695361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.695402 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.695429 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.695449 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.797414 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.797446 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.797456 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.797474 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.797484 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.898557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.898612 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.898621 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.898633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4705]: I0216 14:54:30.898643 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.000522 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.000567 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.000575 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.000591 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.000602 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.102723 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.102822 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.102835 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.102848 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.102857 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.205751 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.205803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.205812 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.205842 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.205852 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.308285 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.308320 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.308331 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.308350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.308361 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.411075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.411127 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.411142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.411160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.411170 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.411083 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:46:09.034383644 +0000 UTC Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.418332 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.418407 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:31 crc kubenswrapper[4705]: E0216 14:54:31.418544 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:31 crc kubenswrapper[4705]: E0216 14:54:31.418748 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.514227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.514296 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.514315 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.514340 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.514358 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.618341 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.618408 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.618423 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.618444 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.618457 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.721860 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.721924 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.721945 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.721971 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.721991 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.825160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.825515 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.825635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.825758 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.825863 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.932211 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.932255 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.932266 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.932283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4705]: I0216 14:54:31.932295 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.035320 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.035602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.035764 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.035789 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.035804 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.138416 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.138482 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.138494 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.138511 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.138523 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.240558 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.240626 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.240637 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.240654 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.240667 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.342829 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.342914 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.342940 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.342976 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.343001 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.411483 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 16:58:15.453429828 +0000 UTC Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.418824 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.418849 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:32 crc kubenswrapper[4705]: E0216 14:54:32.418993 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:32 crc kubenswrapper[4705]: E0216 14:54:32.419120 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.419823 4705 scope.go:117] "RemoveContainer" containerID="93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059" Feb 16 14:54:32 crc kubenswrapper[4705]: E0216 14:54:32.419993 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.445433 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.445506 4705 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.445519 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.445561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.445576 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.547265 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.547303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.547313 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.547330 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.547341 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.649703 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.649773 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.649786 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.649800 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.649810 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.752113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.752154 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.752165 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.752180 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.752191 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.855012 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.855104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.855116 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.855134 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.855147 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.957695 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.957781 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.957798 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.957822 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4705]: I0216 14:54:32.957840 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.059735 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.059785 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.059797 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.059815 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.059828 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.162651 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.162695 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.162705 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.162723 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.162737 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.264824 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.264891 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.264906 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.264983 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.264999 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.367166 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.367207 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.367219 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.367236 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.367248 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.411759 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:11:00.112468447 +0000 UTC Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.419066 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.419075 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.419217 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.419293 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.469910 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.469945 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.469956 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.470000 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.470014 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.559764 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.559981 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.560071 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:05.560045232 +0000 UTC m=+99.745022348 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.572870 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.572934 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.572957 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.572984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.573003 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.675249 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.675292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.675301 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.675318 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.675327 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.778535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.778580 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.778590 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.778606 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.778616 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.881178 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.881223 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.881232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.881247 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.881256 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.891275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.891353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.891410 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.891442 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.891465 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.908437 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.913142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.913204 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.913222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.913245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.913263 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.928263 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.932587 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.932615 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.932625 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.932638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.932648 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.947110 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.952950 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.953013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.953031 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.953057 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.953076 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.968990 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the first attempt above, elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.973202 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.973268 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.973287 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.973318 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.973335 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.989980 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the first attempt above, elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4705]: E0216 14:54:33.990220 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.992160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.992218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.992235 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.992260 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4705]: I0216 14:54:33.992283 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.094589 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.094652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.094675 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.094708 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.094732 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.197242 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.197298 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.197311 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.197330 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.197342 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.299278 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.299312 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.299321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.299335 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.299345 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.401939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.401986 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.402001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.402019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.402034 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.412219 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:42:00.193167995 +0000 UTC Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.418590 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.418590 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:34 crc kubenswrapper[4705]: E0216 14:54:34.418782 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:34 crc kubenswrapper[4705]: E0216 14:54:34.418699 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.504688 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.504721 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.504732 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.504746 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.504756 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.607771 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.608119 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.608316 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.608541 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.608708 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.711439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.711486 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.711493 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.711507 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.711529 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.813987 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.814033 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.814042 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.814058 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.814069 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.889509 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/0.log" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.889562 4705 generic.go:334] "Generic (PLEG): container finished" podID="0ec06562-0237-4709-9469-033783d7d545" containerID="341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f" exitCode=1 Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.889593 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerDied","Data":"341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.889949 4705 scope.go:117] "RemoveContainer" containerID="341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.902207 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.917833 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.917869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.917880 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.917894 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.917904 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.931673 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.949759 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.961921 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.982865 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:34 crc kubenswrapper[4705]: I0216 14:54:34.997201 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.009999 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.020150 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.020196 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.020208 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.020225 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.020236 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.022601 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.037261 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e
73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.051095 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.061846 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc 
kubenswrapper[4705]: I0216 14:54:35.074849 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.090251 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.102044 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.113738 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.122880 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.122923 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.122932 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.122947 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.122956 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.126348 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.141566 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-
16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.155181 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.225291 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.225350 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.225362 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.225401 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.225415 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.327875 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.327910 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.327919 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.327935 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.327945 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.412880 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:56:30.909250174 +0000 UTC Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.419301 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.419301 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:35 crc kubenswrapper[4705]: E0216 14:54:35.419532 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:35 crc kubenswrapper[4705]: E0216 14:54:35.419642 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.431218 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.431284 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.431300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.431324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.431341 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.534032 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.534070 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.534081 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.534097 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.534107 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.636811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.636876 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.636899 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.636925 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.636943 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.739671 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.739935 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.740010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.740088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.740155 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.842856 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.843084 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.843192 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.843293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.843387 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.893701 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/0.log" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.893954 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerStarted","Data":"797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.918047 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.935532 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.945211 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.945235 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.945244 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.945259 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.945271 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.951564 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.973857 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4705]: I0216 14:54:35.990894 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.005068 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.020245 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.039624 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.047551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.047588 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.047602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.047622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.047637 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.055452 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.065876 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.080277 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.093592 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.103523 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.114875 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.131120 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.150333 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.150386 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.150400 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.150417 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.150431 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.152025 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.169977 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.182536 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.252732 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.252789 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.252809 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.252834 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.252852 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.355586 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.355622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.355631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.355644 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.355655 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.413480 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:54:59.285337888 +0000 UTC Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.418860 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.418944 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:36 crc kubenswrapper[4705]: E0216 14:54:36.419037 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:36 crc kubenswrapper[4705]: E0216 14:54:36.419219 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.433818 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.446752 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.460308 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.460489 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.460521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.460532 4705 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.460553 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.460563 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.477775 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.493232 4705 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.508481 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.527571 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.553091 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.563295 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.563347 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.563364 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.563437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.563456 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.568949 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.583129 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.611193 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.626088 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb
3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.640532 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.652562 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.661181 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.665514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.665561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.665574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.665600 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.665613 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.672496 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.684393 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.696236 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:36Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:36 crc 
kubenswrapper[4705]: I0216 14:54:36.768304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.768351 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.768361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.768393 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.768405 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.871321 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.871407 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.871425 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.871449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.871466 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.974699 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.974756 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.974766 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.974783 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4705]: I0216 14:54:36.974794 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.077185 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.077223 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.077232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.077247 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.077257 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.179330 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.179396 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.179407 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.179422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.179440 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.281437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.281486 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.281502 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.281523 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.281536 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.384014 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.384063 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.384073 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.384088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.384115 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.413857 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:59:12.772549505 +0000 UTC Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.419103 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.419119 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:37 crc kubenswrapper[4705]: E0216 14:54:37.419202 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:37 crc kubenswrapper[4705]: E0216 14:54:37.419284 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.485832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.485886 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.485896 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.485916 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.485926 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.618994 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.619056 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.619071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.619088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.619099 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.721412 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.721716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.721786 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.721871 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.721944 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.824533 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.824603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.824623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.824650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.824667 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.926409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.926449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.926457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.926473 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4705]: I0216 14:54:37.926483 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.028745 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.028811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.028824 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.028844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.028856 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.131028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.131069 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.131080 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.131113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.131125 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.233945 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.233993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.234003 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.234021 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.234031 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.336342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.336403 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.336415 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.336432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.336442 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.414497 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:24:30.356675338 +0000 UTC Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.418927 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.419084 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:38 crc kubenswrapper[4705]: E0216 14:54:38.419242 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:38 crc kubenswrapper[4705]: E0216 14:54:38.419443 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.438808 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.438842 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.438851 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.438866 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.438876 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.540772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.540800 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.540807 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.540821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.540831 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.642572 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.642622 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.642635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.642657 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.642671 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.744996 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.745026 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.745037 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.745050 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.745061 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.847559 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.847620 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.847633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.847654 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.847671 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.950497 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.950550 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.950562 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.950585 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4705]: I0216 14:54:38.950598 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.053537 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.053592 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.053605 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.053623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.053639 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.156349 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.156674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.156689 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.156713 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.156733 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.259365 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.259435 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.259451 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.259478 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.259492 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.362300 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.362344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.362352 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.362387 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.362396 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.415065 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 08:11:23.231542568 +0000 UTC Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.418416 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.418499 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:39 crc kubenswrapper[4705]: E0216 14:54:39.418545 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:39 crc kubenswrapper[4705]: E0216 14:54:39.418773 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.465079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.465131 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.465143 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.465160 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.465176 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.567439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.567513 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.567527 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.567549 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.567564 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.670018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.670102 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.670119 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.670149 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.670175 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.774217 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.774317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.774340 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.774435 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.774467 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.877427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.877528 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.877547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.877611 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.877631 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.980959 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.981005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.981017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.981034 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4705]: I0216 14:54:39.981046 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.084245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.084303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.084322 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.084346 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.084363 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.186992 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.187036 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.187045 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.187062 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.187072 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.288674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.288714 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.288725 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.288746 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.288763 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.391460 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.391500 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.391508 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.391521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.391529 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.416084 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:38:59.694745724 +0000 UTC Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.418483 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.418592 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:40 crc kubenswrapper[4705]: E0216 14:54:40.418670 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:40 crc kubenswrapper[4705]: E0216 14:54:40.418719 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.494212 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.494242 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.494253 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.494268 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.494282 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.596016 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.596079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.596097 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.596121 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.596138 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.698961 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.699029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.699047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.699074 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.699096 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.802055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.802107 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.802120 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.802138 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.802155 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.904538 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.904635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.904659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.904697 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4705]: I0216 14:54:40.904713 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.007381 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.007422 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.007435 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.007452 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.007466 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.109821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.109889 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.109905 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.109924 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.109936 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.211771 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.211858 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.211868 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.211881 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.211891 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.315106 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.315149 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.315159 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.315175 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.315185 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.416230 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:35:31.243981217 +0000 UTC Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.417983 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.418030 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.418042 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.418059 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.418071 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.418337 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.418423 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:41 crc kubenswrapper[4705]: E0216 14:54:41.418532 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:41 crc kubenswrapper[4705]: E0216 14:54:41.418612 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.520116 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.520142 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.520150 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.520164 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.520173 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.623623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.623661 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.623674 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.623689 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.623701 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.726104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.726162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.726182 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.726206 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.726223 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.829294 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.829346 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.829363 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.829427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.829449 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.931729 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.931782 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.931803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.931829 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4705]: I0216 14:54:41.931847 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.033753 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.033792 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.033803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.033819 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.033828 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.137332 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.137434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.137457 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.137479 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.137496 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.242012 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.242093 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.242116 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.242146 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.242174 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.345774 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.345829 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.345849 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.345877 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.345897 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.416968 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 11:25:27.783402991 +0000 UTC Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.419311 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.419421 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:42 crc kubenswrapper[4705]: E0216 14:54:42.419474 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:42 crc kubenswrapper[4705]: E0216 14:54:42.419571 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.448441 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.448485 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.448500 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.448519 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.448532 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.551520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.551582 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.551601 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.551623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.551640 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.654125 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.654181 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.654197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.654221 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.654240 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.756515 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.756575 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.756590 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.756617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.756633 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.860458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.860532 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.860549 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.860574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.860593 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.963504 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.963583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.963606 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.963638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4705]: I0216 14:54:42.963664 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.067023 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.067089 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.067106 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.067132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.067150 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.170437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.170476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.170487 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.170503 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.170514 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.274884 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.275510 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.275603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.275636 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.275655 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.378018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.378077 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.378094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.378120 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.378137 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.417909 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:13:35.651529731 +0000 UTC Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.419231 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.419356 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:43 crc kubenswrapper[4705]: E0216 14:54:43.419565 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:43 crc kubenswrapper[4705]: E0216 14:54:43.419898 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.420202 4705 scope.go:117] "RemoveContainer" containerID="93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.481101 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.481413 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.481535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.481639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.481721 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.585365 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.585442 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.585461 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.585488 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.585505 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.688177 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.688222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.688237 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.688256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.688273 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.791317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.791388 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.791405 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.791428 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.791447 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.893672 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.893707 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.893717 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.893732 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.893742 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.926092 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/2.log" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.928525 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262"} Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.929007 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.943752 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 
14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.957155 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.969512 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.980245 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.991429 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.995839 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.996003 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.996071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.996139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4705]: I0216 14:54:43.996207 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.005350 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.017842 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.031648 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.039455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.039494 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.039505 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.039520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.039530 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.044420 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.049501 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.052396 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.052433 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.052448 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.052495 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.052507 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.058955 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.064839 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.068841 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.068884 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.068894 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.068909 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.068919 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.075300 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb
217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.081076 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.084543 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.084591 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.084601 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.084615 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.084624 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.089663 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.100263 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.105222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.105383 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.105447 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.105516 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.105587 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.107644 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.117584 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.118005 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.119702 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.119833 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.119922 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.119993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.120060 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.124178 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8
a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.141553 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.154895 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.190884 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.208968 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.222676 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.222702 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.222711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.222726 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.222737 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.324647 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.324781 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.324795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.324811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.324831 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.418435 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.418559 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 08:11:12.10023925 +0000 UTC Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.418656 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.418832 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.418995 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.427712 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.427741 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.427755 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.427772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.427785 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.530435 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.530481 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.530495 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.530514 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.530526 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.632954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.633197 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.633266 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.633357 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.633495 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.735995 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.736055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.736075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.736100 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.736118 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.839227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.839449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.839641 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.839737 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.839896 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.934507 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/3.log" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.935334 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/2.log" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.938445 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262" exitCode=1 Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.938501 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.938551 4705 scope.go:117] "RemoveContainer" containerID="93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.939723 4705 scope.go:117] "RemoveContainer" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262" Feb 16 14:54:44 crc kubenswrapper[4705]: E0216 14:54:44.940046 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.945273 4705 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.945344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.945403 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.945437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.945464 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.960611 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c
7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.983251 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:44 crc kubenswrapper[4705]: I0216 14:54:44.999325 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:44Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.014664 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.049147 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.049245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.049264 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.049317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.049339 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.057872 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93cce9fd0e0ba2d74977cd9879d088d81a8396982db822f9f65fd502d0258059\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:18Z\\\",\\\"message\\\":\\\"solver-bflhj\\\\nI0216 14:54:18.394445 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.394453 6374 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-bflhj in node crc\\\\nI0216 14:54:18.394459 6374 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns/node-resolver-bflhj after 0 failed attempt(s)\\\\nI0216 14:54:18.394465 6374 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-bflhj\\\\nI0216 14:54:18.393824 6374 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394477 6374 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 14:54:18.394483 6374 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 14:54:18.394506 6374 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0216 14:54:18.393743 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\" handler 8\\\\nI0216 14:54:44.267218 6717 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:54:44.267219 6717 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:54:44.267222 6717 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:54:44.267228 6717 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:54:44.267241 6717 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:54:44.267252 6717 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0216 14:54:44.267265 6717 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:54:44.267273 6717 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:54:44.267272 6717 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267287 6717 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:54:44.267307 6717 factory.go:656] Stopping watch factory\\\\nI0216 14:54:44.267327 6717 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267347 6717 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0216 14:54:44.267383 6717 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:54:44.267401 6717 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 14:54:44.267461 6717 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni
-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.089156 4705 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.107563 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.118220 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.129406 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.141659 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.151818 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.151853 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.151887 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.151905 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.151916 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.154906 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc 
kubenswrapper[4705]: I0216 14:54:45.170690 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9c
a9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.185860 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.204124 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.220018 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.235190 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.254781 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.254881 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.254907 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.254939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.254959 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.256498 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.269036 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.358347 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.358465 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.358490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.358521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.358545 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.419074 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:28:55.092645435 +0000 UTC Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.419287 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.419321 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:45 crc kubenswrapper[4705]: E0216 14:54:45.419687 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:45 crc kubenswrapper[4705]: E0216 14:54:45.419541 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.462050 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.462094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.462103 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.462120 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.462129 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.564952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.565011 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.565023 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.565075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.565085 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.668019 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.668090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.668107 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.668131 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.668148 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.770828 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.770873 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.770889 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.770911 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.770928 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.873740 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.873824 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.873844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.873870 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.873890 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.943101 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/3.log" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.946976 4705 scope.go:117] "RemoveContainer" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262" Feb 16 14:54:45 crc kubenswrapper[4705]: E0216 14:54:45.947147 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.967162 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.977061 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.977427 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.977437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc 
kubenswrapper[4705]: I0216 14:54:45.977456 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.977465 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4705]: I0216 14:54:45.986362 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.002217 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.019361 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.036572 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.057093 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.074549 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.080551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.080584 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 
14:54:46.080595 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.080611 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.080623 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.105139 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\
\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.124270 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.145188 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.164363 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\" handler 8\\\\nI0216 14:54:44.267218 6717 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:54:44.267219 6717 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:54:44.267222 6717 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:54:44.267228 6717 handler.go:208] Removed *v1.Pod 
event handler 6\\\\nI0216 14:54:44.267241 6717 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:54:44.267252 6717 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:54:44.267265 6717 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:54:44.267273 6717 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:54:44.267272 6717 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267287 6717 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:54:44.267307 6717 factory.go:656] Stopping watch factory\\\\nI0216 14:54:44.267327 6717 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267347 6717 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0216 14:54:44.267383 6717 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:54:44.267401 6717 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 14:54:44.267461 6717 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.179449 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb
3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.183343 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.183398 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.183409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.183423 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.183433 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.199837 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.222410 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.237486 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.252328 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.266847 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.277872 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.285714 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.285751 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.285759 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.285774 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.285783 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.387877 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.387941 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.387954 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.387976 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.387987 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.418744 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.418867 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:46 crc kubenswrapper[4705]: E0216 14:54:46.418958 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:46 crc kubenswrapper[4705]: E0216 14:54:46.419067 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.419252 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:23:40.150111069 +0000 UTC Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.440111 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.455255 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.469241 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.486361 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.491013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.491264 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.491526 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.491789 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.492031 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.502424 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.515547 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.536393 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.563432 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.581068 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.595581 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.595662 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.595687 4705 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.595719 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.595748 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.597593 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.615587 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.633245 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.653848 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.673760 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.698776 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.699005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.699251 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.699425 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.699535 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.706719 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.725055 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.745960 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.777224 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\" handler 8\\\\nI0216 14:54:44.267218 6717 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:54:44.267219 6717 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:54:44.267222 6717 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:54:44.267228 6717 handler.go:208] Removed *v1.Pod 
event handler 6\\\\nI0216 14:54:44.267241 6717 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:54:44.267252 6717 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:54:44.267265 6717 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:54:44.267273 6717 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:54:44.267272 6717 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267287 6717 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:54:44.267307 6717 factory.go:656] Stopping watch factory\\\\nI0216 14:54:44.267327 6717 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267347 6717 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0216 14:54:44.267383 6717 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:54:44.267401 6717 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 14:54:44.267461 6717 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:46Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.802163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.802335 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.802463 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.802550 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.802627 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.906168 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.906243 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.906262 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.906292 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4705]: I0216 14:54:46.906311 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.009105 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.009140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.009151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.009167 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.009178 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.112342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.112434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.112451 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.112476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.112494 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.215977 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.216056 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.216076 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.216111 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.216132 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.319095 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.319170 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.319196 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.319235 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.319259 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.419277 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.419277 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:47 crc kubenswrapper[4705]: E0216 14:54:47.419594 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:47 crc kubenswrapper[4705]: E0216 14:54:47.419736 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.419716 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:20:11.01827973 +0000 UTC Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.422129 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.422182 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.422200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.422225 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.422248 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.525637 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.525706 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.525725 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.525752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.525774 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.628510 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.628578 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.628597 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.628623 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.628648 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.731260 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.731307 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.731316 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.731332 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.731344 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.834079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.834135 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.834152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.834176 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.834194 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.936919 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.936962 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.936974 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.936991 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4705]: I0216 14:54:47.937005 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.039836 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.039900 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.039922 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.039956 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.039979 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.142323 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.142430 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.142454 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.142484 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.142504 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.245474 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.245520 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.245530 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.245547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.245560 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.348922 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.349001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.349018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.349044 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.349061 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.418833 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.418834 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:48 crc kubenswrapper[4705]: E0216 14:54:48.419020 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:48 crc kubenswrapper[4705]: E0216 14:54:48.419252 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.421072 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:15:45.803614996 +0000 UTC Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.451814 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.451879 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.451896 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.451921 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.451940 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.554927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.554999 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.555017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.555046 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.555064 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.658500 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.658545 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.658555 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.658572 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.658585 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.762017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.762065 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.762075 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.762093 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.762107 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.864342 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.864449 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.864460 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.864476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.864489 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.966320 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.966415 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.966434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.966458 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4705]: I0216 14:54:48.966477 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.068495 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.068539 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.068553 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.068574 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.068588 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.171219 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.171273 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.171284 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.171303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.171316 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.273638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.273730 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.273743 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.273761 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.273775 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.376259 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.376315 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.376327 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.376345 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.376360 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.418836 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:49 crc kubenswrapper[4705]: E0216 14:54:49.418979 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.418850 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:49 crc kubenswrapper[4705]: E0216 14:54:49.419125 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.422017 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:31:31.293416382 +0000 UTC Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.478902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.478973 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.478996 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.479021 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.479040 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.581213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.581283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.581310 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.581344 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.581406 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.683942 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.684010 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.684029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.684053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.684071 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.787455 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.787521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.787540 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.787567 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.787587 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.890279 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.890338 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.890352 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.890401 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.890417 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.993484 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.993535 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.993554 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.993586 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4705]: I0216 14:54:49.993610 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.096439 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.096510 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.096531 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.096560 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.096583 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.200059 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.200112 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.200128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.200152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.200167 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.265503 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.265710 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:55:54.265675221 +0000 UTC m=+148.450652327 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.303093 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.303175 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.303211 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.303245 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.303267 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.367274 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.367322 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.367341 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.367405 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367501 4705 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367532 4705 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367659 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367706 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367737 4705 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367548 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.367534753 +0000 UTC m=+148.552511819 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367669 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367873 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.367854302 +0000 UTC m=+148.552831388 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367886 4705 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367906 4705 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.367965 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.367944834 +0000 UTC m=+148.552921990 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.368009 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.368000186 +0000 UTC m=+148.552977372 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.406832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.406895 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.406905 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.406925 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.406937 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.418415 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.418430 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.418650 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:50 crc kubenswrapper[4705]: E0216 14:54:50.418862 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.422511 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:07:31.343761565 +0000 UTC Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.509686 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.509769 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.509787 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.509813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 
14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.509831 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.612967 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.613024 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.613060 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.613093 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.613115 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.717085 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.717173 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.717192 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.717226 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.717249 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.820091 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.820130 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.820143 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.820159 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.820172 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.924136 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.924214 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.924234 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.924268 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:50 crc kubenswrapper[4705]: I0216 14:54:50.924287 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.027476 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.027551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.027571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.027602 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.027620 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.132022 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.132063 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.132074 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.132089 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.132099 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.235580 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.235641 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.235660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.235685 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.235704 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.337909 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.337990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.338018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.338048 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.338073 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.419558 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.419630 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:51 crc kubenswrapper[4705]: E0216 14:54:51.419935 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:51 crc kubenswrapper[4705]: E0216 14:54:51.420107 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.423571 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:46:33.137542328 +0000 UTC Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.440628 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.440707 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.440733 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.440766 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.440794 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.544387 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.544436 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.544450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.544470 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.544482 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.648616 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.648692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.648712 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.648740 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.648761 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.752289 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.752343 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.752361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.752408 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.752425 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.855972 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.856041 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.856059 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.856086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.856104 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.959239 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.959317 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.959339 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.959366 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:51 crc kubenswrapper[4705]: I0216 14:54:51.959420 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.061944 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.061990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.062001 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.062018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.062031 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.164105 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.164143 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.164152 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.164165 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.164173 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.266542 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.266603 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.266621 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.266645 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.266663 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.369115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.369180 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.369198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.369222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.369240 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.419191 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.419206 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:52 crc kubenswrapper[4705]: E0216 14:54:52.419442 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:52 crc kubenswrapper[4705]: E0216 14:54:52.419580 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.423723 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:38:35.268963503 +0000 UTC Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.472409 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.472459 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.472477 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.472503 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.472522 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.580015 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.580170 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.580205 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.580275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.580299 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.684703 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.684779 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.684803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.684833 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.684887 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.788271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.788353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.788410 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.788437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.788455 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.891412 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.891479 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.891494 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.891513 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.891525 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.993007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.993045 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.993053 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.993067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4705]: I0216 14:54:52.993076 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.095081 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.095119 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.095127 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.095141 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.095150 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.196820 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.196933 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.196945 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.196963 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.196974 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.298923 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.298984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.299005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.299028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.299045 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.401036 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.401088 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.401104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.401123 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.401176 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.418496 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.418538 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:53 crc kubenswrapper[4705]: E0216 14:54:53.418609 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:53 crc kubenswrapper[4705]: E0216 14:54:53.418685 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.424706 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:00:19.884701724 +0000 UTC Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.503903 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.503952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.503963 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.503982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.503994 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.607105 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.607139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.607148 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.607163 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.607174 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.709975 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.710035 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.710070 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.710095 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.710114 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.812692 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.812784 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.812805 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.812835 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.812854 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.915079 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.915150 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.915172 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.915200 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4705]: I0216 14:54:53.915223 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.017938 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.018014 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.018037 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.018067 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.018090 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.120176 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.120252 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.120275 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.120303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.120326 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.149287 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.149316 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.149324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.149333 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.149342 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.166937 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.170985 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.171016 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.171028 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.171042 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.171052 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.187565 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.190965 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.191004 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.191013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.191029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.191039 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.207118 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.210324 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.210385 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.210394 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.210408 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.210419 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.223177 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.226766 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.226803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.226815 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.226832 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.226848 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.238706 4705 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c4ce382a-96e5-4027-9451-936b39edc61d\\\",\\\"systemUUID\\\":\\\"e0a92891-331c-4cfd-852e-c93d09da3492\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:54Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.238841 4705 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.240082 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.240108 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.240116 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.240129 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.240139 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.342916 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.342964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.342975 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.342995 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.343009 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.418452 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.418462 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.418621 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:54 crc kubenswrapper[4705]: E0216 14:54:54.418837 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.425153 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:51:49.841610125 +0000 UTC Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.445821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.445911 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.445935 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.445969 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.445997 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.549277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.549330 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.549346 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.549396 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.549413 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.651504 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.651576 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.651600 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.651631 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.651657 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.754769 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.754819 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.754831 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.754848 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.754862 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.857805 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.857861 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.857880 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.857907 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.857924 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.961162 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.961213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.961256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.961277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:54 crc kubenswrapper[4705]: I0216 14:54:54.961289 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:54Z","lastTransitionTime":"2026-02-16T14:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.063816 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.063867 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.063887 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.063908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.063922 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.166327 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.166389 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.166400 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.166415 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.166426 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.269113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.269179 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.269198 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.269224 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.269242 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.372221 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.372271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.372283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.372301 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.372313 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.419272 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.419266 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:55 crc kubenswrapper[4705]: E0216 14:54:55.419611 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:55 crc kubenswrapper[4705]: E0216 14:54:55.419706 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.425934 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 07:51:27.415029713 +0000 UTC Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.475752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.475788 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.475801 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.475816 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.475827 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.578565 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.578639 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.578659 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.578682 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.578700 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.681771 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.681812 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.681823 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.681839 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.681849 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.784054 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.784101 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.784113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.784140 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.784156 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.886903 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.886952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.886966 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.886983 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.886993 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.989725 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.989765 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.989774 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.989789 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:55 crc kubenswrapper[4705]: I0216 14:54:55.989801 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:55Z","lastTransitionTime":"2026-02-16T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.093029 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.093121 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.093153 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.093189 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.093214 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.196154 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.196235 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.196258 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.196291 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.196310 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.299521 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.299632 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.299652 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.299680 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.299700 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.403559 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.403643 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.403665 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.403697 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.403717 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.419054 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:56 crc kubenswrapper[4705]: E0216 14:54:56.419234 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.419054 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:56 crc kubenswrapper[4705]: E0216 14:54:56.419666 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.426723 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 13:09:04.606227906 +0000 UTC Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.445220 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12ed47ef-848c-42c1-b5a9-b55f0cb2c89d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9901e7b8877eac686ae1cfdf62fc70b469955c570b625e48dead383d981b3ffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffc98f7f23b910b10f95dfabee1afbb2fac65c790b72df4bdabd60a72a111d4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c9c7408d75f7f8b1abd57a6fe9495bab4aaf539eb561a4baabb2d3795ec081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9992bddec820de07b251de11e0cd2e402d40181c009324a94d14092ffe2620b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024e3609e731a65475fca162b705bc845e4eba07c3d5ad503284612cf018f4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f0e0fe5d3f01faf0e6b7aee8f6487804c2aa37b14494bcee84e5a0b6b6e3b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6da3c106b6d2e2dd597fc02c10deb233fc52e34df2731fd5cccf4db460087bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://281ddb2822662b73715967a0a80191a005d75a6b58461796c442430d4cd80cbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.466758 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac056ed572bbc689a768d351c7257e17ef69db392e62b8e422ea003039ad4749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.488038 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56d457dcb886d8714d3ad4602ba85cdc5ccf9870d2ac6dcfcab4b0e6756e8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420bd0ade803c761a2455ef874903ef672c06eaeb354da359a6e858b227ac19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.507571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.507665 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.507689 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.507721 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.507743 4705 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.517728 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59e81100-8761-4e5f-bab6-07df1c795ccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:44Z\\\",\\\"message\\\":\\\" handler 8\\\\nI0216 14:54:44.267218 6717 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:54:44.267219 6717 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:54:44.267222 6717 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:54:44.267228 6717 handler.go:208] Removed *v1.Pod 
event handler 6\\\\nI0216 14:54:44.267241 6717 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:54:44.267252 6717 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:54:44.267265 6717 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:54:44.267273 6717 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:54:44.267272 6717 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267287 6717 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:54:44.267307 6717 factory.go:656] Stopping watch factory\\\\nI0216 14:54:44.267327 6717 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:54:44.267347 6717 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0216 14:54:44.267383 6717 ovnkube.go:599] Stopped ovnkube\\\\nI0216 14:54:44.267401 6717 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 14:54:44.267461 6717 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://429144d20d8199314c
ce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67wc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tshhr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.544315 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c123cf-24a7-44ec-a502-902632334b01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://031056fd0c02b2293c901ee94ec220ad5567435fedce13d2b4462ff54de17a08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29426113491bdead8a588a107f413c7d19c00396555160d442526fd3ad2f787e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aabcc8a974de01b865dbdcda4fff9f8c01b9b8ab7d5722355963ffc8213dd08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f985af960a9e55f4372f7a8d94f7ad7ae7f1b8d4b8d4fb2e262e9afce16003f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.565586 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.583003 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bflhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f9230c-7ded-46f1-babb-eba339b0ca6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa56e7e0eebbb6021222a2accb84d7f83c835b781098afa8bc91574193adf9a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdgkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bflhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.596801 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb10030d5f534441e4a328f0b690194afde7463f8e78609804e55b833c09dc25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zm7v9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fnnf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.610484 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.610557 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.610578 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.610604 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.610622 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.613074 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f7zct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35c89f5-2045-4451-b301-44615b5f73e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d31f0b51b60b152cc13203d917e571a4d4537d021e1f97bc883f0a4e86759d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5rvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f7zct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.629733 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8m64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67dea3c6-e6a4-4078-9bf2-6928c39f498b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fdqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8m64f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.650137 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60a9a247-f180-4ddd-8577-40f4cfa074da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:40.084292 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:40.086690 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2449020606/tls.crt::/tmp/serving-cert-2449020606/tls.key\\\\\\\"\\\\nI0216 14:53:45.887172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:45.891938 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:45.891991 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:45.892031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:45.892044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:45.902389 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:45.902430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:45.902442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:45.902446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:45.902454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:45.902458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:45.902549 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:45.906142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.670235 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.691328 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33860ee2-697c-4950-af95-26d7916c0a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d586b42bd0e419ac1e9c414c214de2008d64feae035110df5ea937dc7a0b14ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0932c044b26e2e3fd4f079df13ae1847ed05
abe19e5f9353fa3e48bee6387bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txvxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7lk66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.710864 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.715513 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.715551 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.715564 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.715584 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.715599 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.733572 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2ljf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec06562-0237-4709-9469-033783d7d545\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:34Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18\\\\n2026-02-16T14:53:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_68cc7798-7f2c-417a-9948-507e906aef18 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:49Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:49Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6vhrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2ljf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.763592 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48761d4f-98a4-435f-ae5e-6cdb58dbc4a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78bd84164137a0ef34367eb45bb832650e7ca8a7d661e3b1a03f43a089533af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d907c87e06a1a19367afffa4fcf6cedd93fb525ead9c821c7130489a2146a18e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34dbfc4a2fa2fe58e689c69ea8e46e475e00088d4705c0c8d01be3494656b8e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d245744723afc008f61525de236132beeb61be498ff16e0b513428272807b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e656c
510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e656c510d5cc2328e11e275244efde21f882f95a923f04a96b2171e3aaf91a7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b784521656a65b6055a09ffe851f0c237b2601992cab86890e0b3f33b9c2bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://202923adce1d5ea1dda525d1bd542f443d353f372724402fc42522ccd691519b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9gbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rwkxz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.789038 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89c3c028-cf29-410c-9082-4bb40d083e09\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f2546292c0d4ae2703c6edbc8f52a2c7b709173c7dfc814d4d20707ba4ad26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bc04690e907533a2fcd93373f1fa505fcfaf7c414095ece4805ae479673baf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc03dde6354225a10e0fab002db68a4c31bc2c99ed7d668752d1de28a984736e\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.802706 4705 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16c2f281be16550160c609f81d84a212ded4df9a0df89c27a4939ede3da1bb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:56Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.818563 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.818716 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.818730 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.818749 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.818762 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.922038 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.922094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.922108 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.922128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:56 crc kubenswrapper[4705]: I0216 14:54:56.922143 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:56Z","lastTransitionTime":"2026-02-16T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.024853 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.025293 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.025497 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.025701 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.025847 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.128772 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.129094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.129115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.129139 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.129157 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.232157 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.232213 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.232231 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.232255 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.232271 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.335829 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.335882 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.335895 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.335917 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.335930 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.419388 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.419411 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:57 crc kubenswrapper[4705]: E0216 14:54:57.419966 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:57 crc kubenswrapper[4705]: E0216 14:54:57.419831 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.428324 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:35:02.581104608 +0000 UTC Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.438469 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.438506 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.438516 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.438532 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.438542 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.540405 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.540440 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.540452 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.540467 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.540479 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.643277 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.643717 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.643922 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.644137 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.644291 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.747762 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.747844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.747863 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.747894 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.747915 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.852538 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.852612 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.852633 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.852667 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.852692 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.955552 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.955613 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.955630 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.955655 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:57 crc kubenswrapper[4705]: I0216 14:54:57.955674 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:57Z","lastTransitionTime":"2026-02-16T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.058927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.059638 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.059856 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.060026 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.060170 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.164161 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.164216 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.164231 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.164254 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.164270 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.266827 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.266905 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.266924 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.266955 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.266976 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.370534 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.370607 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.370625 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.370654 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.370672 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.419024 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:58 crc kubenswrapper[4705]: E0216 14:54:58.419212 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.419431 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:58 crc kubenswrapper[4705]: E0216 14:54:58.419717 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.429575 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:42:52.831844725 +0000 UTC Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.437655 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.473942 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.473989 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.474006 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.474031 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.474051 4705 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.577681 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.577739 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.577756 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.577781 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.577801 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.680425 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.680723 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.680813 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.680926 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.681009 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.784165 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.784220 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.784237 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.784267 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.784292 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.887653 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.888007 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.888201 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.888335 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.888520 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.991982 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.992052 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.992071 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.992098 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:58 crc kubenswrapper[4705]: I0216 14:54:58.992115 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:58Z","lastTransitionTime":"2026-02-16T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.094737 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.094980 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.095047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.095115 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.095181 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.198078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.198132 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.198148 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.198169 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.198183 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.300964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.301013 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.301027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.301047 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.301060 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.403984 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.404046 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.404065 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.404090 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.404108 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.419221 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:54:59 crc kubenswrapper[4705]: E0216 14:54:59.419365 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.419243 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:59 crc kubenswrapper[4705]: E0216 14:54:59.419808 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.430565 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:42:18.909706534 +0000 UTC Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.506572 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.506632 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.506650 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.506672 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.506689 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.609926 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.609974 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.609986 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.610005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.610018 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.713752 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.713811 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.713829 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.713855 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.713872 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.817413 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.817571 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.817594 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.817617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.817640 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.920711 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.920764 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.920782 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.920806 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:59 crc kubenswrapper[4705]: I0216 14:54:59.920824 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:59Z","lastTransitionTime":"2026-02-16T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.023058 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.023091 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.023100 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.023113 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.023123 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.125717 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.125766 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.125777 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.125796 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.125807 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.228413 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.228461 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.228473 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.228490 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.228503 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.331270 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.331310 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.331332 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.331358 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.331399 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.418623 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.418640 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:00 crc kubenswrapper[4705]: E0216 14:55:00.418999 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:00 crc kubenswrapper[4705]: E0216 14:55:00.419698 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.419958 4705 scope.go:117] "RemoveContainer" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262" Feb 16 14:55:00 crc kubenswrapper[4705]: E0216 14:55:00.420200 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.431498 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:47:45.645766575 +0000 UTC Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.433243 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.433298 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.433322 4705 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.433351 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.433406 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.536008 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.536069 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.536086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.536109 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.536128 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.639199 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.639260 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.639271 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.639288 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.639299 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.742939 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.743055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.743078 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.743108 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.743160 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.846869 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.846958 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.846983 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.847018 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.847045 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.950060 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.950109 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.950126 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.950151 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:00 crc kubenswrapper[4705]: I0216 14:55:00.950169 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:00Z","lastTransitionTime":"2026-02-16T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.052819 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.053252 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.053443 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.053581 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.053706 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.156921 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.156996 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.157017 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.157049 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.157074 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.259853 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.259908 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.259924 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.259947 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.259963 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.363821 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.363864 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.363874 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.363895 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.363908 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.418528 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.418547 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:01 crc kubenswrapper[4705]: E0216 14:55:01.419199 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:01 crc kubenswrapper[4705]: E0216 14:55:01.419533 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.431816 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:41:23.248398244 +0000 UTC Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.467353 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.467432 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.467451 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.467477 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.467494 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.570942 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.571008 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.571027 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.571055 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.571074 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.674601 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.674660 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.674677 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.674705 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.674723 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.778434 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.778493 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.778509 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.778533 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.778551 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.881534 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.881583 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.881600 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.881624 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.881644 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.984617 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.984676 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.984694 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.984721 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:01 crc kubenswrapper[4705]: I0216 14:55:01.984739 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:01Z","lastTransitionTime":"2026-02-16T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.088518 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.088594 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.088613 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.088647 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.088670 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.192841 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.192946 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.192964 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.192993 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.193011 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.296911 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.296986 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.297005 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.297034 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.297056 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.400628 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.400699 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.400717 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.400742 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.400765 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.419302 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.419414 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:02 crc kubenswrapper[4705]: E0216 14:55:02.419611 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:02 crc kubenswrapper[4705]: E0216 14:55:02.419784 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.432919 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:10:00.230677008 +0000 UTC Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.504303 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.504362 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.504413 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.504442 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.504461 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.606844 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.606875 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.606884 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.606897 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.606906 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.709044 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.709094 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.709106 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.709126 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.709138 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.812549 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.812616 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.812635 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.812663 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.812682 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.916795 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.916927 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.916956 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.916995 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:02 crc kubenswrapper[4705]: I0216 14:55:02.917020 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:02Z","lastTransitionTime":"2026-02-16T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.020068 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.020125 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.020141 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.020169 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.020187 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.124128 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.124205 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.124224 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.124256 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.124280 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.227325 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.227431 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.227452 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.227483 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.227508 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.330830 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.330903 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.330925 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.330952 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.330974 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.419048 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.419162 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:03 crc kubenswrapper[4705]: E0216 14:55:03.419350 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:03 crc kubenswrapper[4705]: E0216 14:55:03.419945 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.433084 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:00:08.450216115 +0000 UTC Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.434448 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.434511 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.434532 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.434561 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.434584 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.538315 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.538398 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.538416 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.538459 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.538469 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.641437 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.641488 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.641505 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.641527 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.641546 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.744331 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.744430 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.744450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.744475 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.744493 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.847803 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.847874 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.847893 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.847990 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.848011 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.951450 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.951524 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.951547 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.951578 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:03 crc kubenswrapper[4705]: I0216 14:55:03.951601 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:03Z","lastTransitionTime":"2026-02-16T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.054656 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.054715 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.054734 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.054758 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.054776 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.159227 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.159283 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.159304 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.159329 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.159349 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.262786 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.262847 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.262866 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.262891 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.262913 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.366042 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.366104 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.366118 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.366141 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.366160 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.419021 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.419051 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:04 crc kubenswrapper[4705]: E0216 14:55:04.419163 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:04 crc kubenswrapper[4705]: E0216 14:55:04.419288 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.433598 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:04:48.615423322 +0000 UTC Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.469086 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.469126 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.469137 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.469157 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.469168 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.572184 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.572222 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.572232 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.572248 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.572260 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.586287 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.586340 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.586361 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.586411 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.586433 4705 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:55:04Z","lastTransitionTime":"2026-02-16T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.655034 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9"] Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.655640 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.659946 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.660488 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.660712 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.661209 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.718161 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2ljf7" podStartSLOduration=77.718120725 podStartE2EDuration="1m17.718120725s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.71683325 +0000 UTC m=+98.901810346" watchObservedRunningTime="2026-02-16 14:55:04.718120725 +0000 UTC m=+98.903097831" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.734137 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ef894106-ff89-4de4-8647-9e48b9e5cc87-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.734207 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ef894106-ff89-4de4-8647-9e48b9e5cc87-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.734248 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ef894106-ff89-4de4-8647-9e48b9e5cc87-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.734268 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef894106-ff89-4de4-8647-9e48b9e5cc87-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.734301 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef894106-ff89-4de4-8647-9e48b9e5cc87-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.759609 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7lk66" podStartSLOduration=77.759590056 podStartE2EDuration="1m17.759590056s" 
podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.735408051 +0000 UTC m=+98.920385177" watchObservedRunningTime="2026-02-16 14:55:04.759590056 +0000 UTC m=+98.944567132" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.777853 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.777822858 podStartE2EDuration="1m11.777822858s" podCreationTimestamp="2026-02-16 14:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.75972956 +0000 UTC m=+98.944706656" watchObservedRunningTime="2026-02-16 14:55:04.777822858 +0000 UTC m=+98.962799934" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.799391 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rwkxz" podStartSLOduration=77.79934011 podStartE2EDuration="1m17.79934011s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.799296479 +0000 UTC m=+98.984273575" watchObservedRunningTime="2026-02-16 14:55:04.79934011 +0000 UTC m=+98.984317186" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.835087 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ef894106-ff89-4de4-8647-9e48b9e5cc87-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.835171 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ef894106-ff89-4de4-8647-9e48b9e5cc87-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.835217 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ef894106-ff89-4de4-8647-9e48b9e5cc87-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.835243 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef894106-ff89-4de4-8647-9e48b9e5cc87-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.835286 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef894106-ff89-4de4-8647-9e48b9e5cc87-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.835313 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ef894106-ff89-4de4-8647-9e48b9e5cc87-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: 
\"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.836705 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ef894106-ff89-4de4-8647-9e48b9e5cc87-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.837601 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ef894106-ff89-4de4-8647-9e48b9e5cc87-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.844996 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef894106-ff89-4de4-8647-9e48b9e5cc87-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.859404 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef894106-ff89-4de4-8647-9e48b9e5cc87-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jdns9\" (UID: \"ef894106-ff89-4de4-8647-9e48b9e5cc87\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.914478 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.914452657 podStartE2EDuration="48.914452657s" podCreationTimestamp="2026-02-16 14:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.914139528 +0000 UTC m=+99.099116614" watchObservedRunningTime="2026-02-16 14:55:04.914452657 +0000 UTC m=+99.099429733" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.914628 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.914622041 podStartE2EDuration="6.914622041s" podCreationTimestamp="2026-02-16 14:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.899709281 +0000 UTC m=+99.084686397" watchObservedRunningTime="2026-02-16 14:55:04.914622041 +0000 UTC m=+99.099599117" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.951710 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=74.951684961 podStartE2EDuration="1m14.951684961s" podCreationTimestamp="2026-02-16 14:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.949665275 +0000 UTC m=+99.134642351" watchObservedRunningTime="2026-02-16 14:55:04.951684961 +0000 UTC m=+99.136662047" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.968319 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bflhj" podStartSLOduration=78.968288998 podStartE2EDuration="1m18.968288998s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 14:55:04.967020263 +0000 UTC m=+99.151997349" watchObservedRunningTime="2026-02-16 14:55:04.968288998 +0000 UTC m=+99.153266074" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.978812 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" Feb 16 14:55:04 crc kubenswrapper[4705]: I0216 14:55:04.982911 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podStartSLOduration=77.98289354 podStartE2EDuration="1m17.98289354s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.982104608 +0000 UTC m=+99.167081684" watchObservedRunningTime="2026-02-16 14:55:04.98289354 +0000 UTC m=+99.167870616" Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:04.999873 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-f7zct" podStartSLOduration=78.999850566 podStartE2EDuration="1m18.999850566s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:04.999607869 +0000 UTC m=+99.184584945" watchObservedRunningTime="2026-02-16 14:55:04.999850566 +0000 UTC m=+99.184827642" Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.020450 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" event={"ID":"ef894106-ff89-4de4-8647-9e48b9e5cc87","Type":"ContainerStarted","Data":"d20a2193c566d721c691c1f410419cb5f015624e6ea03badf14643b0fac75d43"} Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.034868 4705 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.034831338 podStartE2EDuration="1m19.034831338s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:05.034322814 +0000 UTC m=+99.219299890" watchObservedRunningTime="2026-02-16 14:55:05.034831338 +0000 UTC m=+99.219808454" Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.419153 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.419202 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:05 crc kubenswrapper[4705]: E0216 14:55:05.420256 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:05 crc kubenswrapper[4705]: E0216 14:55:05.420459 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.434803 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 20:57:50.143387239 +0000 UTC Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.434954 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.448740 4705 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 14:55:05 crc kubenswrapper[4705]: I0216 14:55:05.643266 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:05 crc kubenswrapper[4705]: E0216 14:55:05.643932 4705 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:55:05 crc kubenswrapper[4705]: E0216 14:55:05.644281 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs podName:67dea3c6-e6a4-4078-9bf2-6928c39f498b nodeName:}" failed. No retries permitted until 2026-02-16 14:56:09.644237083 +0000 UTC m=+163.829214209 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs") pod "network-metrics-daemon-8m64f" (UID: "67dea3c6-e6a4-4078-9bf2-6928c39f498b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:55:06 crc kubenswrapper[4705]: I0216 14:55:06.026539 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" event={"ID":"ef894106-ff89-4de4-8647-9e48b9e5cc87","Type":"ContainerStarted","Data":"27a666e462e08046e0e9af84e427b12984703efc253673d45376df706cdbf47b"} Feb 16 14:55:06 crc kubenswrapper[4705]: I0216 14:55:06.418673 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:06 crc kubenswrapper[4705]: I0216 14:55:06.418673 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:06 crc kubenswrapper[4705]: E0216 14:55:06.420942 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:06 crc kubenswrapper[4705]: E0216 14:55:06.421170 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:07 crc kubenswrapper[4705]: I0216 14:55:07.419571 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:07 crc kubenswrapper[4705]: I0216 14:55:07.419772 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:07 crc kubenswrapper[4705]: E0216 14:55:07.419955 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:07 crc kubenswrapper[4705]: E0216 14:55:07.420144 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:08 crc kubenswrapper[4705]: I0216 14:55:08.419365 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:08 crc kubenswrapper[4705]: I0216 14:55:08.419484 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:08 crc kubenswrapper[4705]: E0216 14:55:08.419668 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:08 crc kubenswrapper[4705]: E0216 14:55:08.419854 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:09 crc kubenswrapper[4705]: I0216 14:55:09.419430 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:09 crc kubenswrapper[4705]: I0216 14:55:09.419461 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:09 crc kubenswrapper[4705]: E0216 14:55:09.419635 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:09 crc kubenswrapper[4705]: E0216 14:55:09.419921 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:10 crc kubenswrapper[4705]: I0216 14:55:10.418322 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:10 crc kubenswrapper[4705]: E0216 14:55:10.418454 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:10 crc kubenswrapper[4705]: I0216 14:55:10.418588 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:10 crc kubenswrapper[4705]: E0216 14:55:10.418769 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:11 crc kubenswrapper[4705]: I0216 14:55:11.419109 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:11 crc kubenswrapper[4705]: I0216 14:55:11.419113 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:11 crc kubenswrapper[4705]: E0216 14:55:11.419734 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:11 crc kubenswrapper[4705]: E0216 14:55:11.419982 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:11 crc kubenswrapper[4705]: I0216 14:55:11.420250 4705 scope.go:117] "RemoveContainer" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262" Feb 16 14:55:11 crc kubenswrapper[4705]: E0216 14:55:11.420792 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tshhr_openshift-ovn-kubernetes(59e81100-8761-4e5f-bab6-07df1c795ccb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" Feb 16 14:55:12 crc kubenswrapper[4705]: I0216 14:55:12.419245 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:12 crc kubenswrapper[4705]: I0216 14:55:12.419308 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:12 crc kubenswrapper[4705]: E0216 14:55:12.419542 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:12 crc kubenswrapper[4705]: E0216 14:55:12.419607 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:13 crc kubenswrapper[4705]: I0216 14:55:13.419051 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:13 crc kubenswrapper[4705]: I0216 14:55:13.419222 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:13 crc kubenswrapper[4705]: E0216 14:55:13.419601 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:13 crc kubenswrapper[4705]: E0216 14:55:13.419903 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:14 crc kubenswrapper[4705]: I0216 14:55:14.418847 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:14 crc kubenswrapper[4705]: I0216 14:55:14.418854 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:14 crc kubenswrapper[4705]: E0216 14:55:14.419056 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:14 crc kubenswrapper[4705]: E0216 14:55:14.419267 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:15 crc kubenswrapper[4705]: I0216 14:55:15.418662 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:15 crc kubenswrapper[4705]: E0216 14:55:15.418835 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:15 crc kubenswrapper[4705]: I0216 14:55:15.419487 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:15 crc kubenswrapper[4705]: E0216 14:55:15.419649 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:16 crc kubenswrapper[4705]: I0216 14:55:16.419105 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:16 crc kubenswrapper[4705]: I0216 14:55:16.419212 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:16 crc kubenswrapper[4705]: E0216 14:55:16.421398 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:16 crc kubenswrapper[4705]: E0216 14:55:16.421496 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:17 crc kubenswrapper[4705]: I0216 14:55:17.419138 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:17 crc kubenswrapper[4705]: I0216 14:55:17.419191 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:17 crc kubenswrapper[4705]: E0216 14:55:17.419437 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:17 crc kubenswrapper[4705]: E0216 14:55:17.419646 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:18 crc kubenswrapper[4705]: I0216 14:55:18.419359 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:18 crc kubenswrapper[4705]: E0216 14:55:18.419595 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:18 crc kubenswrapper[4705]: I0216 14:55:18.419921 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:18 crc kubenswrapper[4705]: E0216 14:55:18.420067 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:19 crc kubenswrapper[4705]: I0216 14:55:19.418619 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:19 crc kubenswrapper[4705]: I0216 14:55:19.418695 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:19 crc kubenswrapper[4705]: E0216 14:55:19.419242 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:19 crc kubenswrapper[4705]: E0216 14:55:19.419281 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:20 crc kubenswrapper[4705]: I0216 14:55:20.418596 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:20 crc kubenswrapper[4705]: E0216 14:55:20.418705 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:20 crc kubenswrapper[4705]: I0216 14:55:20.419086 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:20 crc kubenswrapper[4705]: E0216 14:55:20.419431 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.091186 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/1.log" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.092087 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/0.log" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.092211 4705 generic.go:334] "Generic (PLEG): container finished" podID="0ec06562-0237-4709-9469-033783d7d545" containerID="797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105" exitCode=1 Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.092256 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerDied","Data":"797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105"} Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.092341 4705 scope.go:117] "RemoveContainer" containerID="341d06afe3af79c16423d1968a7c3658608118a7ba518607e935afdb49850f7f" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.092978 4705 scope.go:117] "RemoveContainer" containerID="797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105" Feb 16 14:55:21 crc kubenswrapper[4705]: E0216 14:55:21.093288 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2ljf7_openshift-multus(0ec06562-0237-4709-9469-033783d7d545)\"" pod="openshift-multus/multus-2ljf7" podUID="0ec06562-0237-4709-9469-033783d7d545" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.134519 4705 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jdns9" podStartSLOduration=94.134497055 podStartE2EDuration="1m34.134497055s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:06.051458486 +0000 UTC m=+100.236435562" watchObservedRunningTime="2026-02-16 14:55:21.134497055 +0000 UTC m=+115.319474171" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.418680 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:21 crc kubenswrapper[4705]: E0216 14:55:21.419108 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:21 crc kubenswrapper[4705]: I0216 14:55:21.418724 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:21 crc kubenswrapper[4705]: E0216 14:55:21.419446 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:22 crc kubenswrapper[4705]: I0216 14:55:22.103255 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/1.log" Feb 16 14:55:22 crc kubenswrapper[4705]: I0216 14:55:22.418566 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:22 crc kubenswrapper[4705]: E0216 14:55:22.418708 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:22 crc kubenswrapper[4705]: I0216 14:55:22.419781 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:22 crc kubenswrapper[4705]: E0216 14:55:22.420689 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:23 crc kubenswrapper[4705]: I0216 14:55:23.419173 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:23 crc kubenswrapper[4705]: I0216 14:55:23.419177 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:23 crc kubenswrapper[4705]: E0216 14:55:23.419950 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:23 crc kubenswrapper[4705]: E0216 14:55:23.420082 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:24 crc kubenswrapper[4705]: I0216 14:55:24.419442 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:24 crc kubenswrapper[4705]: E0216 14:55:24.419865 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:24 crc kubenswrapper[4705]: I0216 14:55:24.419981 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:24 crc kubenswrapper[4705]: E0216 14:55:24.420607 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:24 crc kubenswrapper[4705]: I0216 14:55:24.421255 4705 scope.go:117] "RemoveContainer" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262" Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.116711 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/3.log" Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.119419 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerStarted","Data":"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f"} Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.120035 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.418423 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.418551 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:25 crc kubenswrapper[4705]: E0216 14:55:25.418925 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:25 crc kubenswrapper[4705]: E0216 14:55:25.419220 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.476015 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podStartSLOduration=98.475974281 podStartE2EDuration="1m38.475974281s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:25.180937384 +0000 UTC m=+119.365914530" watchObservedRunningTime="2026-02-16 14:55:25.475974281 +0000 UTC m=+119.660951407" Feb 16 14:55:25 crc kubenswrapper[4705]: I0216 14:55:25.477941 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8m64f"] Feb 16 14:55:26 crc kubenswrapper[4705]: I0216 14:55:26.122579 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:26 crc kubenswrapper[4705]: E0216 14:55:26.122906 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:26 crc kubenswrapper[4705]: I0216 14:55:26.418677 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:26 crc kubenswrapper[4705]: I0216 14:55:26.418703 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:26 crc kubenswrapper[4705]: E0216 14:55:26.419901 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:26 crc kubenswrapper[4705]: E0216 14:55:26.420019 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:26 crc kubenswrapper[4705]: E0216 14:55:26.447615 4705 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 14:55:26 crc kubenswrapper[4705]: E0216 14:55:26.545220 4705 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 14:55:27 crc kubenswrapper[4705]: I0216 14:55:27.419265 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:27 crc kubenswrapper[4705]: E0216 14:55:27.419477 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:28 crc kubenswrapper[4705]: I0216 14:55:28.418994 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:28 crc kubenswrapper[4705]: I0216 14:55:28.419060 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:28 crc kubenswrapper[4705]: I0216 14:55:28.418994 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:28 crc kubenswrapper[4705]: E0216 14:55:28.419218 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:28 crc kubenswrapper[4705]: E0216 14:55:28.419415 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:28 crc kubenswrapper[4705]: E0216 14:55:28.419582 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:29 crc kubenswrapper[4705]: I0216 14:55:29.418829 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:29 crc kubenswrapper[4705]: E0216 14:55:29.419254 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:30 crc kubenswrapper[4705]: I0216 14:55:30.419351 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:30 crc kubenswrapper[4705]: I0216 14:55:30.419452 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:30 crc kubenswrapper[4705]: E0216 14:55:30.419562 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:30 crc kubenswrapper[4705]: I0216 14:55:30.419611 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:30 crc kubenswrapper[4705]: E0216 14:55:30.419700 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:30 crc kubenswrapper[4705]: E0216 14:55:30.419802 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:31 crc kubenswrapper[4705]: I0216 14:55:31.418838 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:31 crc kubenswrapper[4705]: E0216 14:55:31.419023 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:31 crc kubenswrapper[4705]: E0216 14:55:31.546841 4705 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 14:55:32 crc kubenswrapper[4705]: I0216 14:55:32.419017 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:32 crc kubenswrapper[4705]: I0216 14:55:32.419128 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:32 crc kubenswrapper[4705]: E0216 14:55:32.419199 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:32 crc kubenswrapper[4705]: I0216 14:55:32.419221 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:32 crc kubenswrapper[4705]: E0216 14:55:32.419554 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:32 crc kubenswrapper[4705]: E0216 14:55:32.419610 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:33 crc kubenswrapper[4705]: I0216 14:55:33.419121 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:33 crc kubenswrapper[4705]: E0216 14:55:33.419715 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:34 crc kubenswrapper[4705]: I0216 14:55:34.418468 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:34 crc kubenswrapper[4705]: E0216 14:55:34.418701 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:34 crc kubenswrapper[4705]: I0216 14:55:34.418829 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:34 crc kubenswrapper[4705]: I0216 14:55:34.418851 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:34 crc kubenswrapper[4705]: E0216 14:55:34.419160 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:34 crc kubenswrapper[4705]: I0216 14:55:34.419401 4705 scope.go:117] "RemoveContainer" containerID="797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105" Feb 16 14:55:34 crc kubenswrapper[4705]: E0216 14:55:34.419661 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:35 crc kubenswrapper[4705]: I0216 14:55:35.164766 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/1.log" Feb 16 14:55:35 crc kubenswrapper[4705]: I0216 14:55:35.164825 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerStarted","Data":"c280e78eb2bfe3800a24e6f07f41b296d367a8891b813aff9f9aa9e3820570f6"} Feb 16 14:55:35 crc kubenswrapper[4705]: I0216 14:55:35.418474 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:35 crc kubenswrapper[4705]: E0216 14:55:35.418678 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:55:35 crc kubenswrapper[4705]: I0216 14:55:35.773321 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 14:55:36 crc kubenswrapper[4705]: I0216 14:55:36.418878 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:36 crc kubenswrapper[4705]: I0216 14:55:36.419017 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:36 crc kubenswrapper[4705]: E0216 14:55:36.419840 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8m64f" podUID="67dea3c6-e6a4-4078-9bf2-6928c39f498b" Feb 16 14:55:36 crc kubenswrapper[4705]: I0216 14:55:36.419848 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:36 crc kubenswrapper[4705]: E0216 14:55:36.420113 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:55:36 crc kubenswrapper[4705]: E0216 14:55:36.420398 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:55:37 crc kubenswrapper[4705]: I0216 14:55:37.418633 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:37 crc kubenswrapper[4705]: I0216 14:55:37.421777 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 14:55:37 crc kubenswrapper[4705]: I0216 14:55:37.423703 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.418570 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.418645 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.418897 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.422212 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.422499 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.422635 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 14:55:38 crc kubenswrapper[4705]: I0216 14:55:38.422783 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.446902 4705 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.505928 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.506750 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: W0216 14:55:45.515287 4705 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 16 14:55:45 crc kubenswrapper[4705]: E0216 14:55:45.515354 4705 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 14:55:45 crc kubenswrapper[4705]: W0216 14:55:45.515482 4705 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 16 14:55:45 crc kubenswrapper[4705]: E0216 14:55:45.515509 4705 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User 
\"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 14:55:45 crc kubenswrapper[4705]: W0216 14:55:45.515570 4705 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 16 14:55:45 crc kubenswrapper[4705]: E0216 14:55:45.515592 4705 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 14:55:45 crc kubenswrapper[4705]: W0216 14:55:45.515645 4705 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 16 14:55:45 crc kubenswrapper[4705]: E0216 14:55:45.515663 4705 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in 
API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 14:55:45 crc kubenswrapper[4705]: W0216 14:55:45.515778 4705 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 16 14:55:45 crc kubenswrapper[4705]: E0216 14:55:45.515805 4705 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 14:55:45 crc kubenswrapper[4705]: W0216 14:55:45.518344 4705 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Feb 16 14:55:45 crc kubenswrapper[4705]: E0216 14:55:45.518423 4705 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.524151 4705 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tzm67"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.524762 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s6knp"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.525169 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.525677 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.530078 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.530569 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.530946 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cm4bk"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.531789 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.532200 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.532518 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.539897 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540194 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-encryption-config\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540224 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npt62\" (UniqueName: \"kubernetes.io/projected/39fcf916-177a-4f6c-ab49-18f1595166de-kube-api-access-npt62\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540241 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2527e960-4f78-42fa-8204-72f3dcf0716d-audit-dir\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540257 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-image-import-ca\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.540272 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2527e960-4f78-42fa-8204-72f3dcf0716d-node-pullsecrets\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540298 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-auth-proxy-config\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540315 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540501 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgtvw\" (UniqueName: \"kubernetes.io/projected/2527e960-4f78-42fa-8204-72f3dcf0716d-kube-api-access-fgtvw\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540563 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgjng\" (UniqueName: 
\"kubernetes.io/projected/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-kube-api-access-zgjng\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540604 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5khd\" (UniqueName: \"kubernetes.io/projected/51cb62a1-dd06-4f6b-aa37-c824973a7df0-kube-api-access-r5khd\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540639 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-audit-policies\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540657 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540681 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-audit\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540717 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39fcf916-177a-4f6c-ab49-18f1595166de-audit-dir\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540751 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540792 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-config\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540823 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-config\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540857 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-images\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.540966 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-config\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541035 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-serving-cert\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541102 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51cb62a1-dd06-4f6b-aa37-c824973a7df0-serving-cert\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541139 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541210 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541262 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-config\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541293 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-client-ca\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541327 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " 
pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541379 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-etcd-client\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541402 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-etcd-client\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541418 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-encryption-config\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541436 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csp4\" (UniqueName: \"kubernetes.io/projected/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-kube-api-access-6csp4\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541452 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-machine-approver-tls\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541468 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541484 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2k46\" (UniqueName: \"kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541561 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-serving-cert\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541606 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541706 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-etcd-serving-ca\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.541747 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.543888 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mqkpd"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.544472 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.545196 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.550233 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7clmb"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.550518 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.551226 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.552467 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-cdb8w"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.552597 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.553116 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cdb8w" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.556140 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.556656 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.557056 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.557225 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.557470 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.557612 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.558248 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.559334 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.559603 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.559804 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.559948 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.560364 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.596522 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.597523 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.597702 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.597754 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.597906 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.597967 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.598963 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.600954 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.601192 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.604575 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.637169 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.637206 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.637674 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.637842 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.638081 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.638247 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 14:55:45 
crc kubenswrapper[4705]: I0216 14:55:45.638563 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.638714 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.638877 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.639083 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.639386 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.639490 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.639654 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.639977 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.640018 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.640533 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.640583 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.640760 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.640859 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.640936 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641002 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641063 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641140 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641164 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641281 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 
14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641613 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641759 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.641870 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.642274 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.642418 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.642581 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.642781 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.642906 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.643292 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.643430 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.643536 4705 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.643643 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.643986 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.644503 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.644613 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.644786 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.648206 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.648458 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.642419 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fnrqq"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649043 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649280 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vtlq5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 
14:55:45.649597 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649646 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649688 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgjng\" (UniqueName: \"kubernetes.io/projected/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-kube-api-access-zgjng\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649714 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5khd\" (UniqueName: \"kubernetes.io/projected/51cb62a1-dd06-4f6b-aa37-c824973a7df0-kube-api-access-r5khd\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649739 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-audit-policies\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649761 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-fgtvw\" (UniqueName: \"kubernetes.io/projected/2527e960-4f78-42fa-8204-72f3dcf0716d-kube-api-access-fgtvw\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649784 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl59k\" (UniqueName: \"kubernetes.io/projected/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-kube-api-access-dl59k\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649807 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-audit\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649829 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649849 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39fcf916-177a-4f6c-ab49-18f1595166de-audit-dir\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.649868 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649883 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-config\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649897 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-config\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649912 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-images\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649926 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-config\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " 
pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649942 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649956 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.650171 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.650815 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s5jzr"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.649958 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-serving-cert\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.650901 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 
14:55:45.650919 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvnwc\" (UniqueName: \"kubernetes.io/projected/0f32e760-39ac-4077-9c39-10ac5d621b15-kube-api-access-tvnwc\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.650939 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.650971 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51cb62a1-dd06-4f6b-aa37-c824973a7df0-serving-cert\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.650988 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651004 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r9vcs\" 
(UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651022 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651055 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-config\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651061 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651073 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-client-ca\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651094 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" 
Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651135 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651162 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f32e760-39ac-4077-9c39-10ac5d621b15-serving-cert\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651185 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-etcd-client\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651202 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-etcd-client\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651224 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b1ded37-3147-4b41-b460-63471eba80b3-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651249 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-encryption-config\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651268 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-service-ca-bundle\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651289 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ttrg\" (UniqueName: \"kubernetes.io/projected/6b1ded37-3147-4b41-b460-63471eba80b3-kube-api-access-4ttrg\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651327 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b1ded37-3147-4b41-b460-63471eba80b3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.651351 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651389 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-machine-approver-tls\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651406 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csp4\" (UniqueName: \"kubernetes.io/projected/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-kube-api-access-6csp4\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651422 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-serving-cert\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2k46\" (UniqueName: 
\"kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651467 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651484 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r92bg\" (UniqueName: \"kubernetes.io/projected/100a207c-bfcf-42aa-8233-f760df5a3888-kube-api-access-r92bg\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651503 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-serving-cert\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651518 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: 
I0216 14:55:45.651544 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651559 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651591 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/96793fb5-3ab7-4ae4-af94-8f8d1064b036-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-485f2\" (UID: \"96793fb5-3ab7-4ae4-af94-8f8d1064b036\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651614 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-etcd-serving-ca\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651635 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/100a207c-bfcf-42aa-8233-f760df5a3888-audit-dir\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651664 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651683 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651699 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd7zw\" (UniqueName: \"kubernetes.io/projected/96793fb5-3ab7-4ae4-af94-8f8d1064b036-kube-api-access-nd7zw\") pod \"cluster-samples-operator-665b6dd947-485f2\" (UID: \"96793fb5-3ab7-4ae4-af94-8f8d1064b036\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651713 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-audit-policies\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651732 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-encryption-config\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651772 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npt62\" (UniqueName: \"kubernetes.io/projected/39fcf916-177a-4f6c-ab49-18f1595166de-kube-api-access-npt62\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651789 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2527e960-4f78-42fa-8204-72f3dcf0716d-audit-dir\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651806 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-image-import-ca\") pod 
\"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651823 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651849 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-auth-proxy-config\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651867 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651884 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2527e960-4f78-42fa-8204-72f3dcf0716d-node-pullsecrets\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651903 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-config\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651921 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zb2r\" (UniqueName: \"kubernetes.io/projected/29292cac-8f57-4f0b-aeb5-b4b7db9b3e45-kube-api-access-9zb2r\") pod \"downloads-7954f5f757-cdb8w\" (UID: \"29292cac-8f57-4f0b-aeb5-b4b7db9b3e45\") " pod="openshift-console/downloads-7954f5f757-cdb8w" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.651983 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4msnt"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.652728 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.652952 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.654238 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.655402 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-client-ca\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.655541 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.656107 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.656567 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-etcd-serving-ca\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.657673 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-audit\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.658151 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2527e960-4f78-42fa-8204-72f3dcf0716d-node-pullsecrets\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.659109 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-config\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.659456 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2527e960-4f78-42fa-8204-72f3dcf0716d-audit-dir\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.660291 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-config\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.660349 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39fcf916-177a-4f6c-ab49-18f1595166de-audit-dir\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.662554 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.662626 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51cb62a1-dd06-4f6b-aa37-c824973a7df0-serving-cert\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.663003 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39fcf916-177a-4f6c-ab49-18f1595166de-audit-policies\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.663089 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-auth-proxy-config\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.663579 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-config\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.664051 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-config\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.664560 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-images\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: \"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.664788 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.665062 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-image-import-ca\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.666830 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-encryption-config\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.667287 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.667631 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 14:55:45 crc kubenswrapper[4705]: 
I0216 14:55:45.668822 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-machine-approver-tls\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.669381 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-encryption-config\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670001 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670115 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670258 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670434 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670603 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670690 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670762 4705 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670846 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670915 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.670987 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.671017 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.671062 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.671112 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.671152 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.671447 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673908 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-serving-cert\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673268 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673363 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673446 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673777 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673772 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.673848 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.675671 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39fcf916-177a-4f6c-ab49-18f1595166de-etcd-client\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.678232 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.689827 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-serving-cert\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.700441 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.726812 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.726901 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.727086 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.727985 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2527e960-4f78-42fa-8204-72f3dcf0716d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.728160 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.728882 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.729203 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.729575 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.729890 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.730605 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2527e960-4f78-42fa-8204-72f3dcf0716d-etcd-client\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.731122 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.732014 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5khd\" (UniqueName: \"kubernetes.io/projected/51cb62a1-dd06-4f6b-aa37-c824973a7df0-kube-api-access-r5khd\") pod \"controller-manager-879f6c89f-s6knp\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.732072 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.734992 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.735949 4705 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.738815 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.739409 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.739833 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sngv5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.739995 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.740452 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.740610 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.740608 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.741485 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.741534 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgjng\" (UniqueName: \"kubernetes.io/projected/2ee0fef7-2491-4b6c-9c2a-787efabdb7df-kube-api-access-zgjng\") pod \"machine-approver-56656f9798-ptxlj\" (UID: \"2ee0fef7-2491-4b6c-9c2a-787efabdb7df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.741658 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.741781 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.742464 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.742610 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s6knp"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.746185 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-mw9hv"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.746839 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.746869 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.747435 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.747497 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.747617 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.748242 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.748947 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.749345 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.749786 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.750249 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752736 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752768 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752791 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/933889bd-b762-4afc-9b6c-0088cc6107a5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752811 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752826 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvnwc\" (UniqueName: \"kubernetes.io/projected/0f32e760-39ac-4077-9c39-10ac5d621b15-kube-api-access-tvnwc\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752843 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752867 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: 
\"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752887 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12d26c94-56da-48ee-8001-e82b50099e6b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752903 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-trusted-ca\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752918 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzvq6\" (UniqueName: \"kubernetes.io/projected/12d26c94-56da-48ee-8001-e82b50099e6b-kube-api-access-fzvq6\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752942 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933889bd-b762-4afc-9b6c-0088cc6107a5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752961 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752977 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f32e760-39ac-4077-9c39-10ac5d621b15-serving-cert\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.752992 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b1ded37-3147-4b41-b460-63471eba80b3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753009 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-service-ca-bundle\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753026 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ttrg\" (UniqueName: \"kubernetes.io/projected/6b1ded37-3147-4b41-b460-63471eba80b3-kube-api-access-4ttrg\") pod 
\"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753042 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/12d26c94-56da-48ee-8001-e82b50099e6b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753060 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b1ded37-3147-4b41-b460-63471eba80b3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753081 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-serving-cert\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753098 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc 
kubenswrapper[4705]: I0216 14:55:45.753125 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r92bg\" (UniqueName: \"kubernetes.io/projected/100a207c-bfcf-42aa-8233-f760df5a3888-kube-api-access-r92bg\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753146 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753162 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/96793fb5-3ab7-4ae4-af94-8f8d1064b036-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-485f2\" (UID: \"96793fb5-3ab7-4ae4-af94-8f8d1064b036\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753192 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933889bd-b762-4afc-9b6c-0088cc6107a5-config\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753216 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753233 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-config\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753250 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sfmj\" (UniqueName: \"kubernetes.io/projected/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-kube-api-access-2sfmj\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753268 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-serving-cert\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753284 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753303 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753319 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12d26c94-56da-48ee-8001-e82b50099e6b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753337 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/100a207c-bfcf-42aa-8233-f760df5a3888-audit-dir\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753357 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd7zw\" (UniqueName: \"kubernetes.io/projected/96793fb5-3ab7-4ae4-af94-8f8d1064b036-kube-api-access-nd7zw\") pod \"cluster-samples-operator-665b6dd947-485f2\" (UID: \"96793fb5-3ab7-4ae4-af94-8f8d1064b036\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753387 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-audit-policies\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753409 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753435 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-config\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753451 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zb2r\" (UniqueName: \"kubernetes.io/projected/29292cac-8f57-4f0b-aeb5-b4b7db9b3e45-kube-api-access-9zb2r\") pod \"downloads-7954f5f757-cdb8w\" (UID: \"29292cac-8f57-4f0b-aeb5-b4b7db9b3e45\") " pod="openshift-console/downloads-7954f5f757-cdb8w" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.753471 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl59k\" (UniqueName: \"kubernetes.io/projected/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-kube-api-access-dl59k\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 
14:55:45.753485 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.756623 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.758545 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b1ded37-3147-4b41-b460-63471eba80b3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.759202 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.759621 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: 
\"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.763282 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/100a207c-bfcf-42aa-8233-f760df5a3888-audit-dir\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.764208 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.764772 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-audit-policies\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.764986 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-config\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.765139 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.765938 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f32e760-39ac-4077-9c39-10ac5d621b15-service-ca-bundle\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.765978 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.766710 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.766952 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.767195 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/96793fb5-3ab7-4ae4-af94-8f8d1064b036-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-485f2\" (UID: \"96793fb5-3ab7-4ae4-af94-8f8d1064b036\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.769011 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.769614 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.772543 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f32e760-39ac-4077-9c39-10ac5d621b15-serving-cert\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.772563 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-serving-cert\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 
14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.772546 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.772880 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.773333 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.774114 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgtvw\" (UniqueName: \"kubernetes.io/projected/2527e960-4f78-42fa-8204-72f3dcf0716d-kube-api-access-fgtvw\") pod \"apiserver-76f77b778f-cm4bk\" (UID: \"2527e960-4f78-42fa-8204-72f3dcf0716d\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.774426 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.775177 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.774491 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b1ded37-3147-4b41-b460-63471eba80b3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.776050 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.776146 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.776795 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.776988 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.777120 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cm4bk"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.777159 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.777358 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.777683 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.784999 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bbtvp"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.788545 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h6x7d"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.788875 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.789016 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.789702 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.790261 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.792734 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.793446 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.793628 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tzm67"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.794650 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cdb8w"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.795885 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.797425 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.798337 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.800197 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pdvn5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.801350 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.802038 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hnkwm"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.803240 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7clmb"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.803305 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.805033 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vtlq5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.806885 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-jtcsx"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.807320 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.808695 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s5jzr"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.809994 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.811883 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.813196 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.814963 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.816223 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sngv5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.817879 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.819334 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4msnt"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.821287 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mqkpd"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.828162 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf"] Feb 16 
14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.829772 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.831153 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.832508 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.832532 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.833723 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fnrqq"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.835190 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h6x7d"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.837042 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jtcsx"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.838845 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hnkwm"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.841801 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.843665 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csp4\" (UniqueName: \"kubernetes.io/projected/b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea-kube-api-access-6csp4\") pod \"machine-api-operator-5694c8668f-tzm67\" (UID: 
\"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.844239 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.845888 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.849835 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.851356 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.852671 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.852860 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.853957 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bbtvp"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.855595 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/933889bd-b762-4afc-9b6c-0088cc6107a5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.855731 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12d26c94-56da-48ee-8001-e82b50099e6b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.855867 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-trusted-ca\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.855969 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzvq6\" (UniqueName: \"kubernetes.io/projected/12d26c94-56da-48ee-8001-e82b50099e6b-kube-api-access-fzvq6\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856071 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933889bd-b762-4afc-9b6c-0088cc6107a5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856204 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/12d26c94-56da-48ee-8001-e82b50099e6b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856354 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933889bd-b762-4afc-9b6c-0088cc6107a5-config\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856475 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-config\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856559 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-serving-cert\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856626 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sfmj\" (UniqueName: \"kubernetes.io/projected/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-kube-api-access-2sfmj\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856696 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12d26c94-56da-48ee-8001-e82b50099e6b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.856928 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12d26c94-56da-48ee-8001-e82b50099e6b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.857672 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.859159 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.859592 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/12d26c94-56da-48ee-8001-e82b50099e6b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.861118 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2j46p"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.862172 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.862435 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-z5fgm"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.863010 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.863068 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npt62\" (UniqueName: \"kubernetes.io/projected/39fcf916-177a-4f6c-ab49-18f1595166de-kube-api-access-npt62\") pod \"apiserver-7bbb656c7d-r9vcs\" (UID: \"39fcf916-177a-4f6c-ab49-18f1595166de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.864662 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.865821 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pdvn5"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.867509 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2j46p"] Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.872120 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.888974 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.897782 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933889bd-b762-4afc-9b6c-0088cc6107a5-config\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.900148 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.909655 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.920187 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.926837 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.928816 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.940604 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933889bd-b762-4afc-9b6c-0088cc6107a5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.948555 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 14:55:45 crc kubenswrapper[4705]: I0216 14:55:45.984917 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.007203 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.008920 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.029139 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.049858 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.069275 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.090321 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.112738 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.129909 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.149652 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.170349 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.188679 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.207609 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s6knp"]
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.209080 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.213226 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" event={"ID":"2ee0fef7-2491-4b6c-9c2a-787efabdb7df","Type":"ContainerStarted","Data":"9caf92b69500d5cc6d3a32f4ddf3209698c5e4ee714ca8d888c90a4ff6454526"}
Feb 16 14:55:46 crc kubenswrapper[4705]: W0216 14:55:46.218220 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51cb62a1_dd06_4f6b_aa37_c824973a7df0.slice/crio-68a02cdf61ab6ecf3bd32bb3e54bfbe8ef3fe251a6cfa9d9244adfdab9a8cc1a WatchSource:0}: Error finding container 68a02cdf61ab6ecf3bd32bb3e54bfbe8ef3fe251a6cfa9d9244adfdab9a8cc1a: Status 404 returned error can't find the container with id 68a02cdf61ab6ecf3bd32bb3e54bfbe8ef3fe251a6cfa9d9244adfdab9a8cc1a
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.228984 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.241717 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-serving-cert\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.250560 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.269214 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.277445 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-config\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.299054 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.308513 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-trusted-ca\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.310099 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.328458 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.349505 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.368841 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.390658 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.408923 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.428925 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.441686 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cm4bk"]
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.447060 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs"]
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.449421 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 16 14:55:46 crc kubenswrapper[4705]: W0216 14:55:46.456493 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2527e960_4f78_42fa_8204_72f3dcf0716d.slice/crio-b61ea762b92b5055d078e15b5c56eb01075aba104823decbc384f8e6e2e68084 WatchSource:0}: Error finding container b61ea762b92b5055d078e15b5c56eb01075aba104823decbc384f8e6e2e68084: Status 404 returned error can't find the container with id b61ea762b92b5055d078e15b5c56eb01075aba104823decbc384f8e6e2e68084
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.459134 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tzm67"]
Feb 16 14:55:46 crc kubenswrapper[4705]: W0216 14:55:46.460155 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39fcf916_177a_4f6c_ab49_18f1595166de.slice/crio-f09cec64a6cc4b4cafa4a3632a95fe92b80cbf9a292d185caf71f810f3d4df78 WatchSource:0}: Error finding container f09cec64a6cc4b4cafa4a3632a95fe92b80cbf9a292d185caf71f810f3d4df78: Status 404 returned error can't find the container with id f09cec64a6cc4b4cafa4a3632a95fe92b80cbf9a292d185caf71f810f3d4df78
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.490422 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.509025 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.530502 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.549564 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.572011 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.589165 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.609019 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.631591 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.650263 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.654848 4705 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.654938 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config podName:a8302bc0-d3ed-4950-a728-5569d512a90c nodeName:}" failed. No retries permitted until 2026-02-16 14:55:47.154912539 +0000 UTC m=+141.339889655 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config") pod "route-controller-manager-6576b87f9c-ksptd" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.657603 4705 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.657671 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca podName:a8302bc0-d3ed-4950-a728-5569d512a90c nodeName:}" failed. No retries permitted until 2026-02-16 14:55:47.157654064 +0000 UTC m=+141.342631170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca") pod "route-controller-manager-6576b87f9c-ksptd" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.664601 4705 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.665485 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert podName:a8302bc0-d3ed-4950-a728-5569d512a90c nodeName:}" failed. No retries permitted until 2026-02-16 14:55:47.165435968 +0000 UTC m=+141.350413044 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert") pod "route-controller-manager-6576b87f9c-ksptd" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c") : failed to sync secret cache: timed out waiting for the condition
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.675153 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.693141 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.710426 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.730301 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.748890 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.767630 4705 request.go:700] Waited for 1.017982825s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.769426 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.790316 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.812312 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 14:55:46 crc kubenswrapper[4705]: E0216 14:55:46.821749 4705 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.830284 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.849934 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.870182 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.890093 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.909578 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.929208 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.977079 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvnwc\" (UniqueName: \"kubernetes.io/projected/0f32e760-39ac-4077-9c39-10ac5d621b15-kube-api-access-tvnwc\") pod \"authentication-operator-69f744f599-7clmb\" (UID: \"0f32e760-39ac-4077-9c39-10ac5d621b15\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.977455 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.991245 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd7zw\" (UniqueName: \"kubernetes.io/projected/96793fb5-3ab7-4ae4-af94-8f8d1064b036-kube-api-access-nd7zw\") pod \"cluster-samples-operator-665b6dd947-485f2\" (UID: \"96793fb5-3ab7-4ae4-af94-8f8d1064b036\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2"
Feb 16 14:55:46 crc kubenswrapper[4705]: I0216 14:55:46.992998 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.015226 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl59k\" (UniqueName: \"kubernetes.io/projected/606c1ccf-c94e-417d-852a-9cf7ed18c4f7-kube-api-access-dl59k\") pod \"openshift-config-operator-7777fb866f-vd5wp\" (UID: \"606c1ccf-c94e-417d-852a-9cf7ed18c4f7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.036767 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r92bg\" (UniqueName: \"kubernetes.io/projected/100a207c-bfcf-42aa-8233-f760df5a3888-kube-api-access-r92bg\") pod \"oauth-openshift-558db77b4-mqkpd\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.059392 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zb2r\" (UniqueName: \"kubernetes.io/projected/29292cac-8f57-4f0b-aeb5-b4b7db9b3e45-kube-api-access-9zb2r\") pod \"downloads-7954f5f757-cdb8w\" (UID: \"29292cac-8f57-4f0b-aeb5-b4b7db9b3e45\") " pod="openshift-console/downloads-7954f5f757-cdb8w"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.068487 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ttrg\" (UniqueName: \"kubernetes.io/projected/6b1ded37-3147-4b41-b460-63471eba80b3-kube-api-access-4ttrg\") pod \"openshift-apiserver-operator-796bbdcf4f-7xkgj\" (UID: \"6b1ded37-3147-4b41-b460-63471eba80b3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.069243 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.094273 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.110996 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.130108 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.133328 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.140766 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.150920 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.169098 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.186113 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.186191 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.186240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.194842 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.205467 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.209098 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.220264 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" event={"ID":"51cb62a1-dd06-4f6b-aa37-c824973a7df0","Type":"ContainerStarted","Data":"579ed418f5dc819f6c48558bfbfa22b50b82668164fdcd76aa1e3a094e7dce19"}
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.220336 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" event={"ID":"51cb62a1-dd06-4f6b-aa37-c824973a7df0","Type":"ContainerStarted","Data":"68a02cdf61ab6ecf3bd32bb3e54bfbe8ef3fe251a6cfa9d9244adfdab9a8cc1a"}
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.221580 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.229785 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.233034 4705 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-s6knp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.233083 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.242639 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2"]
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.251487 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.256057 4705 generic.go:334] "Generic (PLEG): container finished" podID="2527e960-4f78-42fa-8204-72f3dcf0716d" containerID="692d3707ea33fb649d005202d2ebd913e77097d96ec86cb5a63ce6196e5259d3" exitCode=0
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.256524 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" event={"ID":"2527e960-4f78-42fa-8204-72f3dcf0716d","Type":"ContainerDied","Data":"692d3707ea33fb649d005202d2ebd913e77097d96ec86cb5a63ce6196e5259d3"}
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.256601 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" event={"ID":"2527e960-4f78-42fa-8204-72f3dcf0716d","Type":"ContainerStarted","Data":"b61ea762b92b5055d078e15b5c56eb01075aba104823decbc384f8e6e2e68084"}
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.256789 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cdb8w"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.269087 4705 generic.go:334] "Generic (PLEG): container finished" podID="39fcf916-177a-4f6c-ab49-18f1595166de" containerID="e55d174441d6122d0d3a1e89d72e520f8e1080f22c9f2d5770f831356e50f7a0" exitCode=0
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.269299 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" event={"ID":"39fcf916-177a-4f6c-ab49-18f1595166de","Type":"ContainerDied","Data":"e55d174441d6122d0d3a1e89d72e520f8e1080f22c9f2d5770f831356e50f7a0"}
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.269354 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" event={"ID":"39fcf916-177a-4f6c-ab49-18f1595166de","Type":"ContainerStarted","Data":"f09cec64a6cc4b4cafa4a3632a95fe92b80cbf9a292d185caf71f810f3d4df78"}
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.269548 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.270459 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7clmb"]
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.275490 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" event={"ID":"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea","Type":"ContainerStarted","Data":"c7137b75686886a2189707982495fb3bf51fcc38d424f5ef79b265dd9a39bd8e"}
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.275528 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" event={"ID":"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea","Type":"ContainerStarted","Data":"e756c4b724ddab8c019210bcae3933c09a2ea55aae60ab47239c4c0aea5f92f5"}
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.275539 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" event={"ID":"b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea","Type":"ContainerStarted","Data":"17f3484a544b9becf17e49536a1c748b3b712622769205cfcdf2009c42454cba"}
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.278333 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" event={"ID":"2ee0fef7-2491-4b6c-9c2a-787efabdb7df","Type":"ContainerStarted","Data":"76a9bdf94ba068a2e488c1ecf60677bbaec9cdce6910c3989ff1891c823c35d5"}
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.278380 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" event={"ID":"2ee0fef7-2491-4b6c-9c2a-787efabdb7df","Type":"ContainerStarted","Data":"acc214ce9ec02c559382ee8d9d0287780478587fbb57d31d1e449c09d665bec1"}
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.288932 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.310238 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.321475 4705 csr.go:261] certificate signing request csr-w9snf is approved, waiting to be issued
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.328906 4705 csr.go:257] certificate signing request csr-w9snf is issued
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.334773 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.351402 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.368610 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.384073 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj"]
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.392163 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.409991 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.428616 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.432975 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mqkpd"]
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.452660 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: W0216 14:55:47.463634 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod100a207c_bfcf_42aa_8233_f760df5a3888.slice/crio-fe3b81e0998e2210d66b3abc493b07a92c35082c815c3be49cace950ab5014e7 WatchSource:0}: Error finding container fe3b81e0998e2210d66b3abc493b07a92c35082c815c3be49cace950ab5014e7: Status 404 returned error can't find the container with id fe3b81e0998e2210d66b3abc493b07a92c35082c815c3be49cace950ab5014e7
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.469263 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.489312 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.511809 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.528951 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.551565 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.568940 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.577927 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cdb8w"]
Feb 16 14:55:47 crc kubenswrapper[4705]: W0216 14:55:47.585076 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29292cac_8f57_4f0b_aeb5_b4b7db9b3e45.slice/crio-7c19c181f57a1945a23be6abf7420821c486e3c78cc6206bdfd23a35e729c628 WatchSource:0}: Error finding container 7c19c181f57a1945a23be6abf7420821c486e3c78cc6206bdfd23a35e729c628: Status 404 returned error can't find the container with id 7c19c181f57a1945a23be6abf7420821c486e3c78cc6206bdfd23a35e729c628
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.589217 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.609791 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.630864 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.649613 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.672180 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.693860 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.731962 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp"]
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.736276 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/933889bd-b762-4afc-9b6c-0088cc6107a5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qf6nq\" (UID: \"933889bd-b762-4afc-9b6c-0088cc6107a5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.753912 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzvq6\" (UniqueName: \"kubernetes.io/projected/12d26c94-56da-48ee-8001-e82b50099e6b-kube-api-access-fzvq6\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf"
Feb 16 14:55:47 crc kubenswrapper[4705]: W0216 14:55:47.760618 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod606c1ccf_c94e_417d_852a_9cf7ed18c4f7.slice/crio-74b21c6f4db6bff94d8a95b797c1be74a68e1817163225c7f0b2cec9c4404196 WatchSource:0}: Error finding container 74b21c6f4db6bff94d8a95b797c1be74a68e1817163225c7f0b2cec9c4404196: Status 404 returned error can't find the container with id 74b21c6f4db6bff94d8a95b797c1be74a68e1817163225c7f0b2cec9c4404196
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.770614 4705 request.go:700] Waited for 1.913684677s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.787101 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sfmj\" (UniqueName: \"kubernetes.io/projected/df2ed87f-5932-49d3-b0b0-a649c9fe7e75-kube-api-access-2sfmj\") pod \"console-operator-58897d9998-sngv5\" (UID: \"df2ed87f-5932-49d3-b0b0-a649c9fe7e75\") " pod="openshift-console-operator/console-operator-58897d9998-sngv5"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.790845 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.794782 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12d26c94-56da-48ee-8001-e82b50099e6b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jwnlf\" (UID: \"12d26c94-56da-48ee-8001-e82b50099e6b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.819095 4705 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 16 14:55:47 crc kubenswrapper[4705]: E0216 14:55:47.823533 4705 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 14:55:47 crc kubenswrapper[4705]: E0216 14:55:47.823658 4705 projected.go:194] Error preparing data for projected volume kube-api-access-x2k46 for pod openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd: failed to sync configmap cache: timed out waiting for the condition
Feb 16 14:55:47 crc kubenswrapper[4705]: E0216 14:55:47.823836 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46 podName:a8302bc0-d3ed-4950-a728-5569d512a90c nodeName:}" failed. No retries permitted until 2026-02-16 14:55:48.323791726 +0000 UTC m=+142.508768802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x2k46" (UniqueName: "kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46") pod "route-controller-manager-6576b87f9c-ksptd" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.830571 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.850732 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.880774 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.890467 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.928911 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.935909 4705 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.940304 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.950358 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.961256 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.988839 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.990303 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997408 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997457 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997487 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/347b9dab-29d3-4126-994e-6501af72985a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997511 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997532 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd2sq\" (UniqueName: \"kubernetes.io/projected/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-kube-api-access-wd2sq\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997561 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrbsz\" (UniqueName: \"kubernetes.io/projected/4e908b56-64e1-410b-952c-a8d5c63242e8-kube-api-access-mrbsz\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997583 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-console-config\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997605 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzfg\" (UniqueName: \"kubernetes.io/projected/afea24b5-a4cc-48f0-869a-f45518e48dd1-kube-api-access-ttzfg\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997639 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-proxy-tls\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997660 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-service-ca\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997691 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997716 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs7sx\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-kube-api-access-xs7sx\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997739 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4e908b56-64e1-410b-952c-a8d5c63242e8-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997760 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-config\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997784 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-oauth-config\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997835 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-registry-certificates\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997873 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4e908b56-64e1-410b-952c-a8d5c63242e8-images\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997898 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-metrics-tls\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997947 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/347b9dab-29d3-4126-994e-6501af72985a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l724\" (UniqueName: \"kubernetes.io/projected/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-kube-api-access-6l724\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.997990 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-client\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998017 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-trusted-ca\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: 
\"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998038 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-oauth-serving-cert\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998059 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-registry-tls\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998082 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-bound-sa-token\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998151 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-serving-cert\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998174 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e037a092-dcda-4227-9872-ea455a432ac6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998195 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stnhs\" (UniqueName: \"kubernetes.io/projected/ee710a8b-3390-4749-949f-e8efa983b1ae-kube-api-access-stnhs\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998224 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e037a092-dcda-4227-9872-ea455a432ac6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-trusted-ca-bundle\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998316 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998339 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk9mv\" (UniqueName: \"kubernetes.io/projected/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-kube-api-access-bk9mv\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998360 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9c4n\" (UniqueName: \"kubernetes.io/projected/f74ef58c-d59c-43a0-8c8d-b6830dfd5120-kube-api-access-k9c4n\") pod \"dns-operator-744455d44c-s5jzr\" (UID: \"f74ef58c-d59c-43a0-8c8d-b6830dfd5120\") " pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998409 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-ca\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998433 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-service-ca\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998471 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-trusted-ca\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998506 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afea24b5-a4cc-48f0-869a-f45518e48dd1-serving-cert\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998529 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f74ef58c-d59c-43a0-8c8d-b6830dfd5120-metrics-tls\") pod \"dns-operator-744455d44c-s5jzr\" (UID: \"f74ef58c-d59c-43a0-8c8d-b6830dfd5120\") " pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998552 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4e908b56-64e1-410b-952c-a8d5c63242e8-proxy-tls\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:47 crc kubenswrapper[4705]: I0216 14:55:47.998575 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e037a092-dcda-4227-9872-ea455a432ac6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 
16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.000029 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:48.500009303 +0000 UTC m=+142.684986479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.002803 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.011743 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.035770 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.041033 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.054670 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.100795 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.100970 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/347b9dab-29d3-4126-994e-6501af72985a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.100993 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101015 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-registration-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc 
kubenswrapper[4705]: I0216 14:55:48.101033 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4689fb61-8aab-4ec2-b20b-5f4d8753758f-profile-collector-cert\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101053 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd2sq\" (UniqueName: \"kubernetes.io/projected/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-kube-api-access-wd2sq\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101073 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrbsz\" (UniqueName: \"kubernetes.io/projected/4e908b56-64e1-410b-952c-a8d5c63242e8-kube-api-access-mrbsz\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101100 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttzfg\" (UniqueName: \"kubernetes.io/projected/afea24b5-a4cc-48f0-869a-f45518e48dd1-kube-api-access-ttzfg\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101128 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs7sx\" (UniqueName: 
\"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-kube-api-access-xs7sx\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101153 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4e908b56-64e1-410b-952c-a8d5c63242e8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101171 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101188 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhbch\" (UniqueName: \"kubernetes.io/projected/bd426fc6-0156-4802-b9ff-69cae6e061b6-kube-api-access-lhbch\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101231 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/611cca5d-97b7-4ca5-b011-5bbf06e79b58-tmpfs\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101248 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4e908b56-64e1-410b-952c-a8d5c63242e8-images\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101272 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-client\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101296 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101311 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e989356-1c20-489c-84a5-6437a37ab683-cert\") pod \"ingress-canary-jtcsx\" (UID: \"9e989356-1c20-489c-84a5-6437a37ab683\") " pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101328 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-socket-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101345 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/226fa561-a051-4bf5-8d7b-b2d1e3871e81-serving-cert\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101387 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25ae00-316a-4dfb-8a83-72fe2318da5e-secret-volume\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101403 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bd426fc6-0156-4802-b9ff-69cae6e061b6-signing-key\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101427 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-bound-sa-token\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101460 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-certs\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101493 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e037a092-dcda-4227-9872-ea455a432ac6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101506 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-plugins-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101531 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-trusted-ca-bundle\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101546 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3d45c6f-dbef-4a9d-9e21-dc929ffe140b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qtmdz\" (UID: \"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101576 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101592 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-metrics-certs\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101605 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f7690b59-a363-4f97-aa47-a6bb9fb41d20-srv-cert\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101621 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqk5k\" (UniqueName: \"kubernetes.io/projected/1ac01610-0f79-4060-9820-5d2f6251a290-kube-api-access-nqk5k\") pod \"migrator-59844c95c7-xhcb8\" (UID: \"1ac01610-0f79-4060-9820-5d2f6251a290\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101661 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/afea24b5-a4cc-48f0-869a-f45518e48dd1-serving-cert\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101679 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4e908b56-64e1-410b-952c-a8d5c63242e8-proxy-tls\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101694 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e037a092-dcda-4227-9872-ea455a432ac6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101734 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25ae00-316a-4dfb-8a83-72fe2318da5e-config-volume\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101758 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd5hv\" (UniqueName: \"kubernetes.io/projected/cc99828c-51d1-42ae-a28b-b0fad667f0fa-kube-api-access-pd5hv\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 
14:55:48.101773 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06c99403-3b09-4401-aa04-41a0ff730c68-service-ca-bundle\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101808 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101824 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4689fb61-8aab-4ec2-b20b-5f4d8753758f-srv-cert\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101839 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226fa561-a051-4bf5-8d7b-b2d1e3871e81-config\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101861 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cqth\" (UniqueName: \"kubernetes.io/projected/cab18608-4788-45e5-a45a-d74482f31738-kube-api-access-5cqth\") pod \"csi-hostpathplugin-2j46p\" (UID: 
\"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101885 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zqvk\" (UniqueName: \"kubernetes.io/projected/06c99403-3b09-4401-aa04-41a0ff730c68-kube-api-access-2zqvk\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101901 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101914 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc99828c-51d1-42ae-a28b-b0fad667f0fa-config-volume\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101930 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-stats-auth\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101954 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-console-config\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101970 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f7690b59-a363-4f97-aa47-a6bb9fb41d20-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.101993 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5bd\" (UniqueName: \"kubernetes.io/projected/5621ad75-f2c2-44c8-aff8-ed4da48fc415-kube-api-access-qc5bd\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102010 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nxjb\" (UniqueName: \"kubernetes.io/projected/9e989356-1c20-489c-84a5-6437a37ab683-kube-api-access-6nxjb\") pod \"ingress-canary-jtcsx\" (UID: \"9e989356-1c20-489c-84a5-6437a37ab683\") " pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102027 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-proxy-tls\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:48 crc 
kubenswrapper[4705]: I0216 14:55:48.102042 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-service-ca\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102066 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-config\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102084 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fee83f9-9187-4930-80d9-8337052eb6f7-config\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102101 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b436476-c64b-40ca-a644-1067ccefcecc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kqpk2\" (UID: \"0b436476-c64b-40ca-a644-1067ccefcecc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102120 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102137 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-oauth-config\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102156 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7c45\" (UniqueName: \"kubernetes.io/projected/c3d45c6f-dbef-4a9d-9e21-dc929ffe140b-kube-api-access-s7c45\") pod \"package-server-manager-789f6589d5-qtmdz\" (UID: \"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102188 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-registry-certificates\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102204 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-metrics-tls\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 
14:55:48.102224 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpfqn\" (UniqueName: \"kubernetes.io/projected/3bf0c710-9567-4ed7-8efb-a30798661adb-kube-api-access-zpfqn\") pod \"multus-admission-controller-857f4d67dd-pdvn5\" (UID: \"3bf0c710-9567-4ed7-8efb-a30798661adb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102258 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/347b9dab-29d3-4126-994e-6501af72985a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102273 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l724\" (UniqueName: \"kubernetes.io/projected/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-kube-api-access-6l724\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102292 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/611cca5d-97b7-4ca5-b011-5bbf06e79b58-webhook-cert\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102306 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc99828c-51d1-42ae-a28b-b0fad667f0fa-metrics-tls\") pod \"dns-default-hnkwm\" 
(UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102324 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-trusted-ca\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102339 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/611cca5d-97b7-4ca5-b011-5bbf06e79b58-apiservice-cert\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102353 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bd426fc6-0156-4802-b9ff-69cae6e061b6-signing-cabundle\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102382 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dm6v\" (UniqueName: \"kubernetes.io/projected/226fa561-a051-4bf5-8d7b-b2d1e3871e81-kube-api-access-8dm6v\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102416 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-oauth-serving-cert\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102431 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-registry-tls\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102448 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-mountpoint-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102481 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-serving-cert\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102498 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fee83f9-9187-4930-80d9-8337052eb6f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102514 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fee83f9-9187-4930-80d9-8337052eb6f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102530 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3bf0c710-9567-4ed7-8efb-a30798661adb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pdvn5\" (UID: \"3bf0c710-9567-4ed7-8efb-a30798661adb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102552 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-csi-data-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102575 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e037a092-dcda-4227-9872-ea455a432ac6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102591 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stnhs\" (UniqueName: \"kubernetes.io/projected/ee710a8b-3390-4749-949f-e8efa983b1ae-kube-api-access-stnhs\") pod \"console-f9d7485db-fnrqq\" (UID: 
\"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102606 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgh9s\" (UniqueName: \"kubernetes.io/projected/fc25ae00-316a-4dfb-8a83-72fe2318da5e-kube-api-access-xgh9s\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk9mv\" (UniqueName: \"kubernetes.io/projected/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-kube-api-access-bk9mv\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102649 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9c4n\" (UniqueName: \"kubernetes.io/projected/f74ef58c-d59c-43a0-8c8d-b6830dfd5120-kube-api-access-k9c4n\") pod \"dns-operator-744455d44c-s5jzr\" (UID: \"f74ef58c-d59c-43a0-8c8d-b6830dfd5120\") " pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102665 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-ca\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102679 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-service-ca\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102697 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntkms\" (UniqueName: \"kubernetes.io/projected/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-kube-api-access-ntkms\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102712 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-node-bootstrap-token\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102728 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmrh7\" (UniqueName: \"kubernetes.io/projected/0b436476-c64b-40ca-a644-1067ccefcecc-kube-api-access-mmrh7\") pod \"control-plane-machine-set-operator-78cbb6b69f-kqpk2\" (UID: \"0b436476-c64b-40ca-a644-1067ccefcecc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102744 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: 
\"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102760 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgttl\" (UniqueName: \"kubernetes.io/projected/611cca5d-97b7-4ca5-b011-5bbf06e79b58-kube-api-access-fgttl\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102776 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-trusted-ca\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102803 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f74ef58c-d59c-43a0-8c8d-b6830dfd5120-metrics-tls\") pod \"dns-operator-744455d44c-s5jzr\" (UID: \"f74ef58c-d59c-43a0-8c8d-b6830dfd5120\") " pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102818 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm4gd\" (UniqueName: \"kubernetes.io/projected/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-kube-api-access-pm4gd\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102835 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxx2p\" 
(UniqueName: \"kubernetes.io/projected/f7690b59-a363-4f97-aa47-a6bb9fb41d20-kube-api-access-zxx2p\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102854 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-default-certificate\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.102883 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxq24\" (UniqueName: \"kubernetes.io/projected/4689fb61-8aab-4ec2-b20b-5f4d8753758f-kube-api-access-gxq24\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.103004 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:48.602989987 +0000 UTC m=+142.787967063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.104648 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/347b9dab-29d3-4126-994e-6501af72985a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.105653 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-service-ca\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.105925 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-config\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.107279 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-registry-certificates\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.109131 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4e908b56-64e1-410b-952c-a8d5c63242e8-images\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.110069 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e037a092-dcda-4227-9872-ea455a432ac6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.110211 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-oauth-serving-cert\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.110520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.111189 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.115437 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4e908b56-64e1-410b-952c-a8d5c63242e8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.116115 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-trusted-ca-bundle\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.118677 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.119114 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-ca\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.126783 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-service-ca\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.128685 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-trusted-ca\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.131133 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-console-config\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.132361 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-oauth-config\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.137677 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f74ef58c-d59c-43a0-8c8d-b6830dfd5120-metrics-tls\") pod \"dns-operator-744455d44c-s5jzr\" (UID: \"f74ef58c-d59c-43a0-8c8d-b6830dfd5120\") " pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.137998 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4e908b56-64e1-410b-952c-a8d5c63242e8-proxy-tls\") pod 
\"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.138269 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-metrics-tls\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.138351 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/347b9dab-29d3-4126-994e-6501af72985a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.138676 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afea24b5-a4cc-48f0-869a-f45518e48dd1-serving-cert\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.138714 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afea24b5-a4cc-48f0-869a-f45518e48dd1-etcd-client\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.139114 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e037a092-dcda-4227-9872-ea455a432ac6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.139746 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-serving-cert\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.142135 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.156120 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-proxy-tls\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.163430 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-registry-tls\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.173142 
4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.186079 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs7sx\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-kube-api-access-xs7sx\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.189310 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd2sq\" (UniqueName: \"kubernetes.io/projected/c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c-kube-api-access-wd2sq\") pod \"openshift-controller-manager-operator-756b6f6bc6-n6lwx\" (UID: \"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.205919 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/611cca5d-97b7-4ca5-b011-5bbf06e79b58-webhook-cert\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.205952 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc99828c-51d1-42ae-a28b-b0fad667f0fa-metrics-tls\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " 
pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.205971 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/611cca5d-97b7-4ca5-b011-5bbf06e79b58-apiservice-cert\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.205993 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bd426fc6-0156-4802-b9ff-69cae6e061b6-signing-cabundle\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206012 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dm6v\" (UniqueName: \"kubernetes.io/projected/226fa561-a051-4bf5-8d7b-b2d1e3871e81-kube-api-access-8dm6v\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206030 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-mountpoint-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206048 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fee83f9-9187-4930-80d9-8337052eb6f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: 
\"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206063 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fee83f9-9187-4930-80d9-8337052eb6f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206080 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3bf0c710-9567-4ed7-8efb-a30798661adb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pdvn5\" (UID: \"3bf0c710-9567-4ed7-8efb-a30798661adb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206096 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-csi-data-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206125 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgh9s\" (UniqueName: \"kubernetes.io/projected/fc25ae00-316a-4dfb-8a83-72fe2318da5e-kube-api-access-xgh9s\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206152 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ntkms\" (UniqueName: \"kubernetes.io/projected/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-kube-api-access-ntkms\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-node-bootstrap-token\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206186 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmrh7\" (UniqueName: \"kubernetes.io/projected/0b436476-c64b-40ca-a644-1067ccefcecc-kube-api-access-mmrh7\") pod \"control-plane-machine-set-operator-78cbb6b69f-kqpk2\" (UID: \"0b436476-c64b-40ca-a644-1067ccefcecc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206202 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206219 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgttl\" (UniqueName: \"kubernetes.io/projected/611cca5d-97b7-4ca5-b011-5bbf06e79b58-kube-api-access-fgttl\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: 
\"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206235 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm4gd\" (UniqueName: \"kubernetes.io/projected/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-kube-api-access-pm4gd\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206250 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxx2p\" (UniqueName: \"kubernetes.io/projected/f7690b59-a363-4f97-aa47-a6bb9fb41d20-kube-api-access-zxx2p\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206268 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-default-certificate\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206289 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxq24\" (UniqueName: \"kubernetes.io/projected/4689fb61-8aab-4ec2-b20b-5f4d8753758f-kube-api-access-gxq24\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206312 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" 
(UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-registration-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206326 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4689fb61-8aab-4ec2-b20b-5f4d8753758f-profile-collector-cert\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206358 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206386 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206402 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhbch\" (UniqueName: \"kubernetes.io/projected/bd426fc6-0156-4802-b9ff-69cae6e061b6-kube-api-access-lhbch\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 
16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206419 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/611cca5d-97b7-4ca5-b011-5bbf06e79b58-tmpfs\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206435 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206449 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e989356-1c20-489c-84a5-6437a37ab683-cert\") pod \"ingress-canary-jtcsx\" (UID: \"9e989356-1c20-489c-84a5-6437a37ab683\") " pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206464 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-socket-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206480 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/226fa561-a051-4bf5-8d7b-b2d1e3871e81-serving-cert\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206495 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25ae00-316a-4dfb-8a83-72fe2318da5e-secret-volume\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206508 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bd426fc6-0156-4802-b9ff-69cae6e061b6-signing-key\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206534 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-certs\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206548 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-plugins-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206564 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3d45c6f-dbef-4a9d-9e21-dc929ffe140b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qtmdz\" 
(UID: \"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206582 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-metrics-certs\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206596 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f7690b59-a363-4f97-aa47-a6bb9fb41d20-srv-cert\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206615 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqk5k\" (UniqueName: \"kubernetes.io/projected/1ac01610-0f79-4060-9820-5d2f6251a290-kube-api-access-nqk5k\") pod \"migrator-59844c95c7-xhcb8\" (UID: \"1ac01610-0f79-4060-9820-5d2f6251a290\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206647 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25ae00-316a-4dfb-8a83-72fe2318da5e-config-volume\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206663 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd5hv\" (UniqueName: 
\"kubernetes.io/projected/cc99828c-51d1-42ae-a28b-b0fad667f0fa-kube-api-access-pd5hv\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206693 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06c99403-3b09-4401-aa04-41a0ff730c68-service-ca-bundle\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206711 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4689fb61-8aab-4ec2-b20b-5f4d8753758f-srv-cert\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206726 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226fa561-a051-4bf5-8d7b-b2d1e3871e81-config\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206741 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cqth\" (UniqueName: \"kubernetes.io/projected/cab18608-4788-45e5-a45a-d74482f31738-kube-api-access-5cqth\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206757 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2zqvk\" (UniqueName: \"kubernetes.io/projected/06c99403-3b09-4401-aa04-41a0ff730c68-kube-api-access-2zqvk\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206773 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc99828c-51d1-42ae-a28b-b0fad667f0fa-config-volume\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206788 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-stats-auth\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206806 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f7690b59-a363-4f97-aa47-a6bb9fb41d20-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206823 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc5bd\" (UniqueName: \"kubernetes.io/projected/5621ad75-f2c2-44c8-aff8-ed4da48fc415-kube-api-access-qc5bd\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206837 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6nxjb\" (UniqueName: \"kubernetes.io/projected/9e989356-1c20-489c-84a5-6437a37ab683-kube-api-access-6nxjb\") pod \"ingress-canary-jtcsx\" (UID: \"9e989356-1c20-489c-84a5-6437a37ab683\") " pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206854 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fee83f9-9187-4930-80d9-8337052eb6f7-config\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206869 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b436476-c64b-40ca-a644-1067ccefcecc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kqpk2\" (UID: \"0b436476-c64b-40ca-a644-1067ccefcecc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206886 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206905 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7c45\" (UniqueName: \"kubernetes.io/projected/c3d45c6f-dbef-4a9d-9e21-dc929ffe140b-kube-api-access-s7c45\") pod \"package-server-manager-789f6589d5-qtmdz\" 
(UID: \"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.206930 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpfqn\" (UniqueName: \"kubernetes.io/projected/3bf0c710-9567-4ed7-8efb-a30798661adb-kube-api-access-zpfqn\") pod \"multus-admission-controller-857f4d67dd-pdvn5\" (UID: \"3bf0c710-9567-4ed7-8efb-a30798661adb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.209137 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06c99403-3b09-4401-aa04-41a0ff730c68-service-ca-bundle\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.209497 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-socket-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.210932 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-csi-data-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.211096 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/226fa561-a051-4bf5-8d7b-b2d1e3871e81-config\") pod 
\"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.214141 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e989356-1c20-489c-84a5-6437a37ab683-cert\") pod \"ingress-canary-jtcsx\" (UID: \"9e989356-1c20-489c-84a5-6437a37ab683\") " pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.214734 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.215295 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f7690b59-a363-4f97-aa47-a6bb9fb41d20-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.215573 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc99828c-51d1-42ae-a28b-b0fad667f0fa-config-volume\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.215721 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrbsz\" (UniqueName: 
\"kubernetes.io/projected/4e908b56-64e1-410b-952c-a8d5c63242e8-kube-api-access-mrbsz\") pod \"machine-config-operator-74547568cd-bmbln\" (UID: \"4e908b56-64e1-410b-952c-a8d5c63242e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.216320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fee83f9-9187-4930-80d9-8337052eb6f7-config\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.216507 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bd426fc6-0156-4802-b9ff-69cae6e061b6-signing-cabundle\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.216570 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-registration-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.217551 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.218907 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25ae00-316a-4dfb-8a83-72fe2318da5e-config-volume\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.218935 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4689fb61-8aab-4ec2-b20b-5f4d8753758f-srv-cert\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.219018 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-plugins-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.219053 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cab18608-4788-45e5-a45a-d74482f31738-mountpoint-dir\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.219284 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:48.719268926 +0000 UTC m=+142.904246002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.219538 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.219905 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/611cca5d-97b7-4ca5-b011-5bbf06e79b58-tmpfs\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.223980 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/611cca5d-97b7-4ca5-b011-5bbf06e79b58-webhook-cert\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.225931 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-metrics-certs\") pod \"router-default-5444994796-mw9hv\" (UID: 
\"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.226037 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-default-certificate\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.226489 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4689fb61-8aab-4ec2-b20b-5f4d8753758f-profile-collector-cert\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.226977 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3bf0c710-9567-4ed7-8efb-a30798661adb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pdvn5\" (UID: \"3bf0c710-9567-4ed7-8efb-a30798661adb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.227517 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bd426fc6-0156-4802-b9ff-69cae6e061b6-signing-key\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.228735 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc99828c-51d1-42ae-a28b-b0fad667f0fa-metrics-tls\") pod \"dns-default-hnkwm\" (UID: 
\"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.229606 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-certs\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.231177 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/06c99403-3b09-4401-aa04-41a0ff730c68-stats-auth\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.232628 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f7690b59-a363-4f97-aa47-a6bb9fb41d20-srv-cert\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.232993 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/611cca5d-97b7-4ca5-b011-5bbf06e79b58-apiservice-cert\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.235038 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b436476-c64b-40ca-a644-1067ccefcecc-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-kqpk2\" (UID: \"0b436476-c64b-40ca-a644-1067ccefcecc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.241996 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttzfg\" (UniqueName: \"kubernetes.io/projected/afea24b5-a4cc-48f0-869a-f45518e48dd1-kube-api-access-ttzfg\") pod \"etcd-operator-b45778765-vtlq5\" (UID: \"afea24b5-a4cc-48f0-869a-f45518e48dd1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.242993 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fee83f9-9187-4930-80d9-8337052eb6f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.243616 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-node-bootstrap-token\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.243972 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.243982 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/226fa561-a051-4bf5-8d7b-b2d1e3871e81-serving-cert\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.244223 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3d45c6f-dbef-4a9d-9e21-dc929ffe140b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qtmdz\" (UID: \"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.244397 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25ae00-316a-4dfb-8a83-72fe2318da5e-secret-volume\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.249409 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-bound-sa-token\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.265078 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stnhs\" (UniqueName: \"kubernetes.io/projected/ee710a8b-3390-4749-949f-e8efa983b1ae-kube-api-access-stnhs\") pod \"console-f9d7485db-fnrqq\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " 
pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.270301 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.281824 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf"] Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.285124 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.296415 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk9mv\" (UniqueName: \"kubernetes.io/projected/bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7-kube-api-access-bk9mv\") pod \"machine-config-controller-84d6567774-7rwk8\" (UID: \"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.297064 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.310544 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.311049 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:48.81103368 +0000 UTC m=+142.996010756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.315567 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e037a092-dcda-4227-9872-ea455a432ac6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5ntj5\" (UID: \"e037a092-dcda-4227-9872-ea455a432ac6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.317314 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cdb8w" 
event={"ID":"29292cac-8f57-4f0b-aeb5-b4b7db9b3e45","Type":"ContainerStarted","Data":"4e36b99e9e29733d3e20c6e7feda67be482fbf84a2e3657e13acc8a6ee781e4b"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.317352 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cdb8w" event={"ID":"29292cac-8f57-4f0b-aeb5-b4b7db9b3e45","Type":"ContainerStarted","Data":"7c19c181f57a1945a23be6abf7420821c486e3c78cc6206bdfd23a35e729c628"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.318096 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-cdb8w" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.321799 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq"] Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.324049 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9c4n\" (UniqueName: \"kubernetes.io/projected/f74ef58c-d59c-43a0-8c8d-b6830dfd5120-kube-api-access-k9c4n\") pod \"dns-operator-744455d44c-s5jzr\" (UID: \"f74ef58c-d59c-43a0-8c8d-b6830dfd5120\") " pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.333237 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 14:50:47 +0000 UTC, rotation deadline is 2026-12-27 17:44:31.079497439 +0000 UTC Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.333263 4705 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7538h48m42.746237058s for next certificate rotation Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.343609 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-cdb8w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: 
connection refused" start-of-body= Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.343681 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cdb8w" podUID="29292cac-8f57-4f0b-aeb5-b4b7db9b3e45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.345790 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l724\" (UniqueName: \"kubernetes.io/projected/1b830d25-6407-4aa5-bb8a-4f1789e62fe9-kube-api-access-6l724\") pod \"ingress-operator-5b745b69d9-5bmsj\" (UID: \"1b830d25-6407-4aa5-bb8a-4f1789e62fe9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.347436 4705 generic.go:334] "Generic (PLEG): container finished" podID="606c1ccf-c94e-417d-852a-9cf7ed18c4f7" containerID="4ac96bd6c779cc04d96091f3a59fa8fd73597afa72e91d30522f991e49fbd79d" exitCode=0 Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.348195 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" event={"ID":"606c1ccf-c94e-417d-852a-9cf7ed18c4f7","Type":"ContainerDied","Data":"4ac96bd6c779cc04d96091f3a59fa8fd73597afa72e91d30522f991e49fbd79d"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.348230 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" event={"ID":"606c1ccf-c94e-417d-852a-9cf7ed18c4f7","Type":"ContainerStarted","Data":"74b21c6f4db6bff94d8a95b797c1be74a68e1817163225c7f0b2cec9c4404196"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.387339 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" 
event={"ID":"100a207c-bfcf-42aa-8233-f760df5a3888","Type":"ContainerStarted","Data":"1ab62a114c8a82ff2f7a49e4541517f644160b299d9d80b4f883f76fa7d4c60d"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.387403 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" event={"ID":"100a207c-bfcf-42aa-8233-f760df5a3888","Type":"ContainerStarted","Data":"fe3b81e0998e2210d66b3abc493b07a92c35082c815c3be49cace950ab5014e7"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.388243 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.408005 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpfqn\" (UniqueName: \"kubernetes.io/projected/3bf0c710-9567-4ed7-8efb-a30798661adb-kube-api-access-zpfqn\") pod \"multus-admission-controller-857f4d67dd-pdvn5\" (UID: \"3bf0c710-9567-4ed7-8efb-a30798661adb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.424304 4705 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mqkpd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" start-of-body= Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.424357 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" podUID="100a207c-bfcf-42aa-8233-f760df5a3888" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.437072 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mmrh7\" (UniqueName: \"kubernetes.io/projected/0b436476-c64b-40ca-a644-1067ccefcecc-kube-api-access-mmrh7\") pod \"control-plane-machine-set-operator-78cbb6b69f-kqpk2\" (UID: \"0b436476-c64b-40ca-a644-1067ccefcecc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.437655 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.437999 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sngv5"] Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.441106 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.441296 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2k46\" (UniqueName: \"kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.445088 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:48.945061367 +0000 UTC m=+143.130038443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.466305 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2k46\" (UniqueName: \"kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46\") pod \"route-controller-manager-6576b87f9c-ksptd\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.483832 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cqth\" (UniqueName: \"kubernetes.io/projected/cab18608-4788-45e5-a45a-d74482f31738-kube-api-access-5cqth\") pod \"csi-hostpathplugin-2j46p\" (UID: \"cab18608-4788-45e5-a45a-d74482f31738\") " pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.498285 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgh9s\" (UniqueName: \"kubernetes.io/projected/fc25ae00-316a-4dfb-8a83-72fe2318da5e-kube-api-access-xgh9s\") pod \"collect-profiles-29520885-h8s9q\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.504128 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntkms\" (UniqueName: \"kubernetes.io/projected/3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb-kube-api-access-ntkms\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-9wzlt\" (UID: \"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.507543 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" event={"ID":"6b1ded37-3147-4b41-b460-63471eba80b3","Type":"ContainerStarted","Data":"8fc5ccb65ec92b21a649cfd4501f7ab1801321c49246ae0429f210b4cffc5e9c"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.507721 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" event={"ID":"6b1ded37-3147-4b41-b460-63471eba80b3","Type":"ContainerStarted","Data":"d9a12cba1f126afe8f1c77a1c17b3dbbceaebd1ec9d1bff2c60a93bfe828a599"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.501138 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.510926 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.513611 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" event={"ID":"39fcf916-177a-4f6c-ab49-18f1595166de","Type":"ContainerStarted","Data":"3e55ff93237fb9ad1ed5d623118e2f22f1d1f290d65f79dd684335c8e696e49a"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.519761 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.522246 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7c45\" (UniqueName: \"kubernetes.io/projected/c3d45c6f-dbef-4a9d-9e21-dc929ffe140b-kube-api-access-s7c45\") pod \"package-server-manager-789f6589d5-qtmdz\" (UID: \"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.525299 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhbch\" (UniqueName: \"kubernetes.io/projected/bd426fc6-0156-4802-b9ff-69cae6e061b6-kube-api-access-lhbch\") pod \"service-ca-9c57cc56f-h6x7d\" (UID: \"bd426fc6-0156-4802-b9ff-69cae6e061b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.531296 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.535790 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" event={"ID":"96793fb5-3ab7-4ae4-af94-8f8d1064b036","Type":"ContainerStarted","Data":"2edbb4497336ca91e0d098963c0e23a4c0ec3034d27d21eba6686cf7087ab6cb"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.535834 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" event={"ID":"96793fb5-3ab7-4ae4-af94-8f8d1064b036","Type":"ContainerStarted","Data":"7477d8fd11607bee41cb06bc251148e912eee98d748970fc55066ec8a4d46692"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.535847 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" event={"ID":"96793fb5-3ab7-4ae4-af94-8f8d1064b036","Type":"ContainerStarted","Data":"857ce5e8efadf7ba4914f1404203b8fedd6e3a74b5067548ff5545886615abc5"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.538485 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dm6v\" (UniqueName: \"kubernetes.io/projected/226fa561-a051-4bf5-8d7b-b2d1e3871e81-kube-api-access-8dm6v\") pod \"service-ca-operator-777779d784-6fdc4\" (UID: \"226fa561-a051-4bf5-8d7b-b2d1e3871e81\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.546975 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 
16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.548217 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.048200375 +0000 UTC m=+143.233177451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.554467 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.557683 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxq24\" (UniqueName: \"kubernetes.io/projected/4689fb61-8aab-4ec2-b20b-5f4d8753758f-kube-api-access-gxq24\") pod \"catalog-operator-68c6474976-9bb6j\" (UID: \"4689fb61-8aab-4ec2-b20b-5f4d8753758f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.562413 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx"] Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.563402 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.572243 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" event={"ID":"2527e960-4f78-42fa-8204-72f3dcf0716d","Type":"ContainerStarted","Data":"18938ddb45824b203f68a7a7473b0de5b16a114ce9b7b1135790f07bb00bd1f3"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.572302 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" event={"ID":"2527e960-4f78-42fa-8204-72f3dcf0716d","Type":"ContainerStarted","Data":"b65302a380a18c6d41d67bd6d40e7cf924aef9d0b63ab5c6080db219a603a798"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.584514 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" event={"ID":"0f32e760-39ac-4077-9c39-10ac5d621b15","Type":"ContainerStarted","Data":"14b19c2a281ac5ed26da9857cfc65a9d252fa0d2901748b63be801b5d3edeaf0"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.584552 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" event={"ID":"0f32e760-39ac-4077-9c39-10ac5d621b15","Type":"ContainerStarted","Data":"c76d7c3e573a4c022aaa621beedea49f5fca0bfd079547c0ae36c77e4f820645"} Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.590756 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgttl\" (UniqueName: \"kubernetes.io/projected/611cca5d-97b7-4ca5-b011-5bbf06e79b58-kube-api-access-fgttl\") pod \"packageserver-d55dfcdfc-gbsfs\" (UID: \"611cca5d-97b7-4ca5-b011-5bbf06e79b58\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.591183 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.606094 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.610325 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.619804 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.625732 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc5bd\" (UniqueName: \"kubernetes.io/projected/5621ad75-f2c2-44c8-aff8-ed4da48fc415-kube-api-access-qc5bd\") pod \"marketplace-operator-79b997595-bbtvp\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.632214 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm4gd\" (UniqueName: \"kubernetes.io/projected/ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e-kube-api-access-pm4gd\") pod \"machine-config-server-z5fgm\" (UID: \"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e\") " pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.646360 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nxjb\" (UniqueName: \"kubernetes.io/projected/9e989356-1c20-489c-84a5-6437a37ab683-kube-api-access-6nxjb\") pod \"ingress-canary-jtcsx\" (UID: \"9e989356-1c20-489c-84a5-6437a37ab683\") " 
pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.649551 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.654749 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.154260502 +0000 UTC m=+143.339237578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.667724 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.680654 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6fee83f9-9187-4930-80d9-8337052eb6f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-88bxc\" (UID: \"6fee83f9-9187-4930-80d9-8337052eb6f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.686027 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.694803 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxx2p\" (UniqueName: \"kubernetes.io/projected/f7690b59-a363-4f97-aa47-a6bb9fb41d20-kube-api-access-zxx2p\") pod \"olm-operator-6b444d44fb-nkwfz\" (UID: \"f7690b59-a363-4f97-aa47-a6bb9fb41d20\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.708407 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.709093 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.716601 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zqvk\" (UniqueName: \"kubernetes.io/projected/06c99403-3b09-4401-aa04-41a0ff730c68-kube-api-access-2zqvk\") pod \"router-default-5444994796-mw9hv\" (UID: \"06c99403-3b09-4401-aa04-41a0ff730c68\") " pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.716839 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.730647 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.742611 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd5hv\" (UniqueName: \"kubernetes.io/projected/cc99828c-51d1-42ae-a28b-b0fad667f0fa-kube-api-access-pd5hv\") pod \"dns-default-hnkwm\" (UID: \"cc99828c-51d1-42ae-a28b-b0fad667f0fa\") " pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.746411 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.757429 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.758414 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.258397037 +0000 UTC m=+143.443374113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.758513 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jtcsx" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.766558 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqk5k\" (UniqueName: \"kubernetes.io/projected/1ac01610-0f79-4060-9820-5d2f6251a290-kube-api-access-nqk5k\") pod \"migrator-59844c95c7-xhcb8\" (UID: \"1ac01610-0f79-4060-9820-5d2f6251a290\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.773406 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.799360 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-z5fgm" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.860741 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.861282 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.361268177 +0000 UTC m=+143.546245253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.882268 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln"] Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.938212 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.947201 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.961653 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.968762 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:48 crc kubenswrapper[4705]: E0216 14:55:48.969131 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:55:49.469114514 +0000 UTC m=+143.654091590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:48 crc kubenswrapper[4705]: I0216 14:55:48.976510 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.069725 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.070022 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.57001091 +0000 UTC m=+143.754987986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.170920 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.171084 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.67105836 +0000 UTC m=+143.856035436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.172116 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.172454 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.672443238 +0000 UTC m=+143.857420314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.201844 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" podStartSLOduration=123.201813106 podStartE2EDuration="2m3.201813106s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:49.193712473 +0000 UTC m=+143.378689549" watchObservedRunningTime="2026-02-16 14:55:49.201813106 +0000 UTC m=+143.386790172" Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.214193 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8"] Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.244633 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ptxlj" podStartSLOduration=123.244617903 podStartE2EDuration="2m3.244617903s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:49.243717898 +0000 UTC m=+143.428694974" watchObservedRunningTime="2026-02-16 14:55:49.244617903 +0000 UTC m=+143.429594979" Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.273234 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.273744 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.773727344 +0000 UTC m=+143.958704420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.374672 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.380706 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.880688136 +0000 UTC m=+144.065665212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.478908 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-485f2" podStartSLOduration=123.478887288 podStartE2EDuration="2m3.478887288s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:49.421701414 +0000 UTC m=+143.606678500" watchObservedRunningTime="2026-02-16 14:55:49.478887288 +0000 UTC m=+143.663864364" Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.479533 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.480008 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:49.979982448 +0000 UTC m=+144.164959524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.480748 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pdvn5"] Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.588709 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.589230 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.089212343 +0000 UTC m=+144.274189419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.652249 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" event={"ID":"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c","Type":"ContainerStarted","Data":"67e03a3d7063a32b2ea64590872588044a07583375801f7f696e755e10ce4153"} Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.669728 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fnrqq"] Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.691617 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.691979 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.191961929 +0000 UTC m=+144.376939005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.693230 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q"] Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.699160 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sngv5" event={"ID":"df2ed87f-5932-49d3-b0b0-a649c9fe7e75","Type":"ContainerStarted","Data":"522ebcb4a57dd4f489d85a2dc36dad0f463a6362a88da850664f8a0bd42e14e3"} Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.699206 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sngv5" event={"ID":"df2ed87f-5932-49d3-b0b0-a649c9fe7e75","Type":"ContainerStarted","Data":"ecd76be3a98cfb3a9db239615ef1f4c79c3baafd6f9564eee32176529547b45d"} Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.699942 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sngv5" Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.719562 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-cdb8w" podStartSLOduration=122.719546898 podStartE2EDuration="2m2.719546898s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:49.719153167 +0000 UTC 
m=+143.904130253" watchObservedRunningTime="2026-02-16 14:55:49.719546898 +0000 UTC m=+143.904523974" Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.730624 4705 patch_prober.go:28] interesting pod/console-operator-58897d9998-sngv5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.730673 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sngv5" podUID="df2ed87f-5932-49d3-b0b0-a649c9fe7e75" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.772666 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" event={"ID":"606c1ccf-c94e-417d-852a-9cf7ed18c4f7","Type":"ContainerStarted","Data":"d7100a71796228955a441849456e864611b865b9d54e7079be03574e7b402556"} Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.772720 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:49 crc kubenswrapper[4705]: W0216 14:55:49.784194 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddc326d4_0e31_4506_a9fd_e8f7c19f1e8e.slice/crio-56d9a0c212518b18593f9fe3e1324ac7d60f53e3fdedb947e3f2d7524f1b2384 WatchSource:0}: Error finding container 56d9a0c212518b18593f9fe3e1324ac7d60f53e3fdedb947e3f2d7524f1b2384: Status 404 returned error can't find the container with id 56d9a0c212518b18593f9fe3e1324ac7d60f53e3fdedb947e3f2d7524f1b2384 Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 
14:55:49.796194 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.796633 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.296617699 +0000 UTC m=+144.481594775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.817038 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" event={"ID":"4e908b56-64e1-410b-952c-a8d5c63242e8","Type":"ContainerStarted","Data":"7a35592451f18dea3810831f57181f1e93e0845edf7b517bd33d807aab628aa1"} Feb 16 14:55:49 crc kubenswrapper[4705]: W0216 14:55:49.818113 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc25ae00_316a_4dfb_8a83_72fe2318da5e.slice/crio-253e99d912eda9b50729d3d97f6f2413bf4ef41819d6f8568fe0d6b7421307b5 WatchSource:0}: Error finding container 253e99d912eda9b50729d3d97f6f2413bf4ef41819d6f8568fe0d6b7421307b5: Status 404 
returned error can't find the container with id 253e99d912eda9b50729d3d97f6f2413bf4ef41819d6f8568fe0d6b7421307b5 Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.819206 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" podStartSLOduration=122.81919358 podStartE2EDuration="2m2.81919358s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:49.786753327 +0000 UTC m=+143.971730403" watchObservedRunningTime="2026-02-16 14:55:49.81919358 +0000 UTC m=+144.004170656" Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.834118 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" event={"ID":"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7","Type":"ContainerStarted","Data":"0c81ce5511daaf2f7b984dfefe315f6d23cb6598666007da5b4c9d6130593e3f"} Feb 16 14:55:49 crc kubenswrapper[4705]: W0216 14:55:49.837324 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee710a8b_3390_4749_949f_e8efa983b1ae.slice/crio-7a32273060fa9c5acf759e7781d16b8a6a0afc21afb3ce21b1bb14a5f231b5c2 WatchSource:0}: Error finding container 7a32273060fa9c5acf759e7781d16b8a6a0afc21afb3ce21b1bb14a5f231b5c2: Status 404 returned error can't find the container with id 7a32273060fa9c5acf759e7781d16b8a6a0afc21afb3ce21b1bb14a5f231b5c2 Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.846330 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" event={"ID":"12d26c94-56da-48ee-8001-e82b50099e6b","Type":"ContainerStarted","Data":"f4d0393a3c846d28f6fd0853519acca5ded45b1e0fcdd2b99c7996680403812f"} Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 
14:55:49.846384 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" event={"ID":"12d26c94-56da-48ee-8001-e82b50099e6b","Type":"ContainerStarted","Data":"39b390a0856fdfa35cd42c1f948ad40325dc2aaa31fcfb1aeec8cdbf1a1ed362"} Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.904954 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:49 crc kubenswrapper[4705]: E0216 14:55:49.906465 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.4064498 +0000 UTC m=+144.591426876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.925564 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vtlq5"] Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.933533 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" event={"ID":"933889bd-b762-4afc-9b6c-0088cc6107a5","Type":"ContainerStarted","Data":"203e745e40643fa3477cbb0a1e0a6cbb60bd0bd73eb6703c51bb3c308455d4e5"} Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.933572 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" event={"ID":"933889bd-b762-4afc-9b6c-0088cc6107a5","Type":"ContainerStarted","Data":"6b5c8a342357bf8d1bb6e69a0ccd024b1e5f8ca04185bdca3ea4bf8525432de0"} Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.950895 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-cdb8w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.950946 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cdb8w" podUID="29292cac-8f57-4f0b-aeb5-b4b7db9b3e45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: 
connect: connection refused" Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.977066 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj"] Feb 16 14:55:49 crc kubenswrapper[4705]: I0216 14:55:49.984973 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.004170 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" podStartSLOduration=124.004155958 podStartE2EDuration="2m4.004155958s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.000728594 +0000 UTC m=+144.185705680" watchObservedRunningTime="2026-02-16 14:55:50.004155958 +0000 UTC m=+144.189133034" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.007114 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.011027 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.511012567 +0000 UTC m=+144.695989643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.122645 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-7clmb" podStartSLOduration=124.122618887 podStartE2EDuration="2m4.122618887s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.029401943 +0000 UTC m=+144.214379019" watchObservedRunningTime="2026-02-16 14:55:50.122618887 +0000 UTC m=+144.307595963" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.126221 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.126535 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.626519374 +0000 UTC m=+144.811496450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.139708 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-s5jzr"] Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.149442 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"] Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.227730 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.228652 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.728638444 +0000 UTC m=+144.913615520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.320628 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7xkgj" podStartSLOduration=124.320597414 podStartE2EDuration="2m4.320597414s" podCreationTimestamp="2026-02-16 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.296790169 +0000 UTC m=+144.481767255" watchObservedRunningTime="2026-02-16 14:55:50.320597414 +0000 UTC m=+144.505574490" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.330105 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.330524 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.830506836 +0000 UTC m=+145.015483912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.339066 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-tzm67" podStartSLOduration=123.339046201 podStartE2EDuration="2m3.339046201s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.337019265 +0000 UTC m=+144.521996331" watchObservedRunningTime="2026-02-16 14:55:50.339046201 +0000 UTC m=+144.524023277" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.402097 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" podStartSLOduration=123.402069775 podStartE2EDuration="2m3.402069775s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.401596912 +0000 UTC m=+144.586573988" watchObservedRunningTime="2026-02-16 14:55:50.402069775 +0000 UTC m=+144.587046851" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.431456 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: 
\"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.431845 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:50.931829634 +0000 UTC m=+145.116806710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.535898 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.536452 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.036435142 +0000 UTC m=+145.221412218 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.536670 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.537122 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.03711458 +0000 UTC m=+145.222091656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.638876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.639804 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.139788555 +0000 UTC m=+145.324765631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.694074 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" podStartSLOduration=123.694048048 podStartE2EDuration="2m3.694048048s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.690636474 +0000 UTC m=+144.875613550" watchObservedRunningTime="2026-02-16 14:55:50.694048048 +0000 UTC m=+144.879025114" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.716483 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz"] Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.742242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.742592 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 14:55:51.242578543 +0000 UTC m=+145.427555619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.747070 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2j46p"] Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.809474 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-sngv5" podStartSLOduration=123.809448482 podStartE2EDuration="2m3.809448482s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.747417446 +0000 UTC m=+144.932394532" watchObservedRunningTime="2026-02-16 14:55:50.809448482 +0000 UTC m=+144.994425558" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.838762 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qf6nq" podStartSLOduration=123.838735238 podStartE2EDuration="2m3.838735238s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.807687384 +0000 UTC m=+144.992664460" watchObservedRunningTime="2026-02-16 14:55:50.838735238 +0000 UTC m=+145.023712304" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 
14:55:50.847872 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.848318 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.34827783 +0000 UTC m=+145.533254906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.848741 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.849243 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 14:55:51.349230017 +0000 UTC m=+145.534207093 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.902901 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.903940 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.912355 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jwnlf" podStartSLOduration=123.912330013 podStartE2EDuration="2m3.912330013s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:50.898207844 +0000 UTC m=+145.083184920" watchObservedRunningTime="2026-02-16 14:55:50.912330013 +0000 UTC m=+145.097307089" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.921490 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.921821 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 
14:55:50.935986 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.949981 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:50 crc kubenswrapper[4705]: E0216 14:55:50.950476 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.450459802 +0000 UTC m=+145.635436878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.954263 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs"] Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.989438 4705 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cm4bk container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 14:55:50 crc kubenswrapper[4705]: [+]log ok Feb 16 14:55:50 crc kubenswrapper[4705]: [+]etcd ok Feb 16 14:55:50 crc 
kubenswrapper[4705]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/max-in-flight-filter ok Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 16 14:55:50 crc kubenswrapper[4705]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 16 14:55:50 crc kubenswrapper[4705]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/project.openshift.io-projectcache ok Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-startinformers ok Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 16 14:55:50 crc kubenswrapper[4705]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 14:55:50 crc kubenswrapper[4705]: livez check failed Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.989495 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" podUID="2527e960-4f78-42fa-8204-72f3dcf0716d" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.996030 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" event={"ID":"3bf0c710-9567-4ed7-8efb-a30798661adb","Type":"ContainerStarted","Data":"ae4b6a1b321339206664619a99696dfddec250fbc2f5ecdec70184a6653461c7"} Feb 16 14:55:50 crc kubenswrapper[4705]: I0216 14:55:50.997454 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" event={"ID":"c2ad5caa-f1f1-4e90-9e0c-bb3a24af638c","Type":"ContainerStarted","Data":"4a68694d9205bdfb87def20290951dac09f439c75f7a789b6d5f85b4fc1f55b1"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.034119 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jtcsx"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.038382 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" event={"ID":"4e908b56-64e1-410b-952c-a8d5c63242e8","Type":"ContainerStarted","Data":"1f21d9effdf9705ea08bc41127e5cc733c4e91ff79d4185095b243078fd2de65"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.051148 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.053027 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.553002223 +0000 UTC m=+145.737979289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.058440 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mw9hv" event={"ID":"06c99403-3b09-4401-aa04-41a0ff730c68","Type":"ContainerStarted","Data":"a402c8597dbe05a1e88b62719874fc53d124ee20e23ff3bf26e132efc606488f"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.070301 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-z5fgm" event={"ID":"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e","Type":"ContainerStarted","Data":"94d504236ea23083b3f8b6e4e3a7463619ca7b3d1b1cda1464504b78551c2536"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.070355 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-z5fgm" event={"ID":"ddc326d4-0e31-4506-a9fd-e8f7c19f1e8e","Type":"ContainerStarted","Data":"56d9a0c212518b18593f9fe3e1324ac7d60f53e3fdedb947e3f2d7524f1b2384"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.082446 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" event={"ID":"cab18608-4788-45e5-a45a-d74482f31738","Type":"ContainerStarted","Data":"9bfd7aa05de5195b870054a9be0207c2efeaea82c9337684f6869c68482e5883"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.084412 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n6lwx" podStartSLOduration=124.084397996 podStartE2EDuration="2m4.084397996s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:51.049472266 +0000 UTC m=+145.234449352" watchObservedRunningTime="2026-02-16 14:55:51.084397996 +0000 UTC m=+145.269375072" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.095157 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h6x7d"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.097456 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-z5fgm" podStartSLOduration=6.097439965 podStartE2EDuration="6.097439965s" podCreationTimestamp="2026-02-16 14:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:51.09288262 +0000 UTC m=+145.277859716" watchObservedRunningTime="2026-02-16 14:55:51.097439965 +0000 UTC m=+145.282417041" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.099219 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.112758 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" event={"ID":"fc25ae00-316a-4dfb-8a83-72fe2318da5e","Type":"ContainerStarted","Data":"253e99d912eda9b50729d3d97f6f2413bf4ef41819d6f8568fe0d6b7421307b5"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.149880 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" 
event={"ID":"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7","Type":"ContainerStarted","Data":"c8214de32fdc1fc886409072431943019d130477219756c277a160eecedfb7f4"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.151826 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.153752 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.653733214 +0000 UTC m=+145.838710290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.155071 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" event={"ID":"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b","Type":"ContainerStarted","Data":"3c4849cb214c7aa28bc73c13495530f85b602e022011781000b33ac7d07225ba"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.171600 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" 
event={"ID":"afea24b5-a4cc-48f0-869a-f45518e48dd1","Type":"ContainerStarted","Data":"8d46c962c1d4c6c0e783070ab8b6586f1a2d8ec5957bf6fd2fe4928fa619c32f"} Feb 16 14:55:51 crc kubenswrapper[4705]: W0216 14:55:51.194783 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4689fb61_8aab_4ec2_b20b_5f4d8753758f.slice/crio-8bf57da0b9dcf848a1f6ebf5cc32bf8dda725d631f7f37df85d1fd3c23a82bc5 WatchSource:0}: Error finding container 8bf57da0b9dcf848a1f6ebf5cc32bf8dda725d631f7f37df85d1fd3c23a82bc5: Status 404 returned error can't find the container with id 8bf57da0b9dcf848a1f6ebf5cc32bf8dda725d631f7f37df85d1fd3c23a82bc5 Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.197210 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fnrqq" event={"ID":"ee710a8b-3390-4749-949f-e8efa983b1ae","Type":"ContainerStarted","Data":"7a32273060fa9c5acf759e7781d16b8a6a0afc21afb3ce21b1bb14a5f231b5c2"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.200974 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" event={"ID":"f74ef58c-d59c-43a0-8c8d-b6830dfd5120","Type":"ContainerStarted","Data":"aeded7686c16a893794a53b0a863cccf844479315d2809b50870eb0997572f6d"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.214524 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.222462 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" event={"ID":"1b830d25-6407-4aa5-bb8a-4f1789e62fe9","Type":"ContainerStarted","Data":"5274a293877d22f88a5c94d288c9fa460fc4bd8cf8f1896c3b6c419eafa2460b"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.229133 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.240353 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" event={"ID":"a8302bc0-d3ed-4950-a728-5569d512a90c","Type":"ContainerStarted","Data":"a885e38805c34d5c1e7c89b9f1f29de1c4b5e2713a9a9b37541794c592748f30"} Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.241348 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.254242 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r9vcs" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.256086 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.260092 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.760079949 +0000 UTC m=+145.945057025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.266054 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" podStartSLOduration=124.266039973 podStartE2EDuration="2m4.266039973s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:51.259588616 +0000 UTC m=+145.444565682" watchObservedRunningTime="2026-02-16 14:55:51.266039973 +0000 UTC m=+145.451017049" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.266525 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.266740 4705 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ksptd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.266861 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.277984 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vd5wp" Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.344668 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hnkwm"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.357631 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.357973 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.857950502 +0000 UTC m=+146.042927578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.358868 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.360209 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bbtvp"] Feb 16 14:55:51 crc kubenswrapper[4705]: W0216 14:55:51.379904 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b436476_c64b_40ca_a644_1067ccefcecc.slice/crio-b16197b197c82ee22391a0cb4178278687c0d7cef983e402abef0d0917a0a204 WatchSource:0}: Error finding container b16197b197c82ee22391a0cb4178278687c0d7cef983e402abef0d0917a0a204: Status 404 returned error can't find the container with id b16197b197c82ee22391a0cb4178278687c0d7cef983e402abef0d0917a0a204 Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.437678 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.464112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.464744 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:51.96472922 +0000 UTC m=+146.149706296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.491339 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.550741 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8"] Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.564924 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.565303 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:55:52.065285436 +0000 UTC m=+146.250262512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.670085 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.670487 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.17047405 +0000 UTC m=+146.355451126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.771001 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.773849 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.273804282 +0000 UTC m=+146.458781358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.873135 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.873524 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.373512226 +0000 UTC m=+146.558489302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.974070 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.974223 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.474196485 +0000 UTC m=+146.659173561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:51 crc kubenswrapper[4705]: I0216 14:55:51.974734 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:51 crc kubenswrapper[4705]: E0216 14:55:51.975055 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.475042149 +0000 UTC m=+146.660019225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.078102 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.079930 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.579892343 +0000 UTC m=+146.764869429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.185451 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.186393 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.686358532 +0000 UTC m=+146.871335608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.243750 4705 patch_prober.go:28] interesting pod/console-operator-58897d9998-sngv5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.243807 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sngv5" podUID="df2ed87f-5932-49d3-b0b0-a649c9fe7e75" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.254456 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" event={"ID":"bd426fc6-0156-4802-b9ff-69cae6e061b6","Type":"ContainerStarted","Data":"68afa5d37dcb0bd560ce9e68615d6d01e3af5eb2e1b9934ea3eee2ad11045301"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.254515 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" event={"ID":"bd426fc6-0156-4802-b9ff-69cae6e061b6","Type":"ContainerStarted","Data":"f5e1241d8cceceaa4cf7955c96694655910c3aac804d9458d7eed5e2d8f7c7a9"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.258982 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" event={"ID":"f7690b59-a363-4f97-aa47-a6bb9fb41d20","Type":"ContainerStarted","Data":"91ddc1445c6f2b7d384e8ff92f62eb2c2288e3b57d3b21afb21f402fdbc7991a"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.273183 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" event={"ID":"4689fb61-8aab-4ec2-b20b-5f4d8753758f","Type":"ContainerStarted","Data":"bc577dca0a9f4a27bee132034cc2355f85c9aeb8ba0246369a6b355614b69e1b"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.273225 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" event={"ID":"4689fb61-8aab-4ec2-b20b-5f4d8753758f","Type":"ContainerStarted","Data":"8bf57da0b9dcf848a1f6ebf5cc32bf8dda725d631f7f37df85d1fd3c23a82bc5"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.274189 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.289115 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.290537 4705 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9bb6j container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.290605 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" podUID="4689fb61-8aab-4ec2-b20b-5f4d8753758f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.291511 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.791462154 +0000 UTC m=+146.976439230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.292146 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-h6x7d" podStartSLOduration=125.292133482 podStartE2EDuration="2m5.292133482s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.291622148 +0000 UTC m=+146.476599224" watchObservedRunningTime="2026-02-16 14:55:52.292133482 +0000 UTC m=+146.477110558"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.304898 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" event={"ID":"1b830d25-6407-4aa5-bb8a-4f1789e62fe9","Type":"ContainerStarted","Data":"a5b890aba6e5606e2ebb8bcd914ccb26f505b11671e29866da7c47d2811f1b6c"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.304941 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" event={"ID":"1b830d25-6407-4aa5-bb8a-4f1789e62fe9","Type":"ContainerStarted","Data":"8421e39a1706977d4233d2a28550032e48de135d17fe66d0a57c022891f85f71"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.329926 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" event={"ID":"fc25ae00-316a-4dfb-8a83-72fe2318da5e","Type":"ContainerStarted","Data":"5fa9675e76e9d05c53516ed8415decce4c44f3785514ae5a86a5062278da9f97"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.340083 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" podStartSLOduration=125.340060141 podStartE2EDuration="2m5.340060141s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.311809703 +0000 UTC m=+146.496786789" watchObservedRunningTime="2026-02-16 14:55:52.340060141 +0000 UTC m=+146.525037207"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.347812 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" event={"ID":"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b","Type":"ContainerStarted","Data":"60dc9e6d2b0dd282a9c70edcd58ad154c93253f67a5badffeab59c9efb60ba32"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.347864 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" event={"ID":"c3d45c6f-dbef-4a9d-9e21-dc929ffe140b","Type":"ContainerStarted","Data":"a790d572800fa52770fbc18aa7470a67020926ea0ca283dfe170ee99162ad461"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.348447 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.356134 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" event={"ID":"6fee83f9-9187-4930-80d9-8337052eb6f7","Type":"ContainerStarted","Data":"7109ec3cdda2df4766176fee66745b5558e04a716efaf4c8fd9fbea6d72add9b"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.367346 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" event={"ID":"f74ef58c-d59c-43a0-8c8d-b6830dfd5120","Type":"ContainerStarted","Data":"dfd833896e0c65af112ccb79ef8a2148496798f5064351c4c9a8d3381b88f470"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.372693 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" event={"ID":"1ac01610-0f79-4060-9820-5d2f6251a290","Type":"ContainerStarted","Data":"953a735848373500e54d58776d2aab9c02101767e845e22ab939680eb1206ed7"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.372755 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" event={"ID":"1ac01610-0f79-4060-9820-5d2f6251a290","Type":"ContainerStarted","Data":"8caa093e711405ff1e9e52c68f7fcb4e7f9b360b2ca26865fd633cebd5c52ebd"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.385986 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" event={"ID":"611cca5d-97b7-4ca5-b011-5bbf06e79b58","Type":"ContainerStarted","Data":"2a59e0a73356d875731e1cb70771b07653505ff59d10ba796560f24f5cf8e232"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.386022 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" event={"ID":"611cca5d-97b7-4ca5-b011-5bbf06e79b58","Type":"ContainerStarted","Data":"3a0b3f9c6befc99daa9a4323fb104c136719bd382793f83c5f7e9826159c1080"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.387724 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.390419 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.392269 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.892249386 +0000 UTC m=+147.077226462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.393262 4705 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gbsfs container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.393341 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" podUID="611cca5d-97b7-4ca5-b011-5bbf06e79b58" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.401328 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mw9hv" event={"ID":"06c99403-3b09-4401-aa04-41a0ff730c68","Type":"ContainerStarted","Data":"d1d0cbf507463f137a43d9d446d862f598f014f6cadb38e6090bb89daa04367f"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.409638 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" podStartSLOduration=125.409610014 podStartE2EDuration="2m5.409610014s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.408298898 +0000 UTC m=+146.593275974" watchObservedRunningTime="2026-02-16 14:55:52.409610014 +0000 UTC m=+146.594587090"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.415955 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5bmsj" podStartSLOduration=125.415924818 podStartE2EDuration="2m5.415924818s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.341029767 +0000 UTC m=+146.526006843" watchObservedRunningTime="2026-02-16 14:55:52.415924818 +0000 UTC m=+146.600901884"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.430905 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" podStartSLOduration=125.430890869 podStartE2EDuration="2m5.430890869s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.429292705 +0000 UTC m=+146.614269791" watchObservedRunningTime="2026-02-16 14:55:52.430890869 +0000 UTC m=+146.615867945"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.438819 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" event={"ID":"3bf0c710-9567-4ed7-8efb-a30798661adb","Type":"ContainerStarted","Data":"d90ba896f91c00043fde2edf9950b8bd05b49dfe70d22b016de544c96298487f"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.438853 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" event={"ID":"3bf0c710-9567-4ed7-8efb-a30798661adb","Type":"ContainerStarted","Data":"bbc88e5f4ba26b884a7dd6bc577ff0c062e6d2bfc8bbed6904f6d880dcc0c28f"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.453157 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hnkwm" event={"ID":"cc99828c-51d1-42ae-a28b-b0fad667f0fa","Type":"ContainerStarted","Data":"3ac9e1eeee88573d4bc5b847fdb96c2b64aa0a38788502289697f086621b357b"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.471827 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" event={"ID":"bdbbc70e-00dc-4c80-a6e7-7a4b10455cb7","Type":"ContainerStarted","Data":"9022cd9c4285e25067254f40f80afe5ebe0ce66f82e73c29e4ccfb7b08563c71"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.482638 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" event={"ID":"0b436476-c64b-40ca-a644-1067ccefcecc","Type":"ContainerStarted","Data":"76b2e089be137083b0a361614d3a7524dfbd1bd739ee4f9ff6905cfa40bf6639"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.482696 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" event={"ID":"0b436476-c64b-40ca-a644-1067ccefcecc","Type":"ContainerStarted","Data":"b16197b197c82ee22391a0cb4178278687c0d7cef983e402abef0d0917a0a204"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.488840 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" event={"ID":"226fa561-a051-4bf5-8d7b-b2d1e3871e81","Type":"ContainerStarted","Data":"229d2776fb45a00d1d94b4f3c0366d2b69e2686c5aff92a2fac875499d9bc3ff"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.488912 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" event={"ID":"226fa561-a051-4bf5-8d7b-b2d1e3871e81","Type":"ContainerStarted","Data":"728be1cb1a52cc6c12a82eb0e16b5155e6e0db2af291d72a1f637d8e8dec1999"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.489841 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" podStartSLOduration=125.48981489 podStartE2EDuration="2m5.48981489s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.486810158 +0000 UTC m=+146.671787234" watchObservedRunningTime="2026-02-16 14:55:52.48981489 +0000 UTC m=+146.674791966"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.492292 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.493249 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:52.993226364 +0000 UTC m=+147.178203440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.496344 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" event={"ID":"afea24b5-a4cc-48f0-869a-f45518e48dd1","Type":"ContainerStarted","Data":"0e73707b81a97013e5668c5ddf5692903e9ee83472977de0b73d3b3d64ef2b7b"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.501796 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-mw9hv" podStartSLOduration=125.501764359 podStartE2EDuration="2m5.501764359s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.458465308 +0000 UTC m=+146.643442394" watchObservedRunningTime="2026-02-16 14:55:52.501764359 +0000 UTC m=+146.686741435"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.520081 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" event={"ID":"a8302bc0-d3ed-4950-a728-5569d512a90c","Type":"ContainerStarted","Data":"b86aba374934a2e53dfafb4487f9bd171946f1f9c67960302e0552580e0f1f6d"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.522527 4705 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ksptd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.522591 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.545285 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fnrqq" event={"ID":"ee710a8b-3390-4749-949f-e8efa983b1ae","Type":"ContainerStarted","Data":"3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.573592 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7rwk8" podStartSLOduration=125.573569835 podStartE2EDuration="2m5.573569835s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.522051127 +0000 UTC m=+146.707028203" watchObservedRunningTime="2026-02-16 14:55:52.573569835 +0000 UTC m=+146.758546911"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.575186 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kqpk2" podStartSLOduration=125.575175039 podStartE2EDuration="2m5.575175039s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.570797248 +0000 UTC m=+146.755774324" watchObservedRunningTime="2026-02-16 14:55:52.575175039 +0000 UTC m=+146.760152125"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.595274 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.597060 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.09703043 +0000 UTC m=+147.282007506 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.608584 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" event={"ID":"4e908b56-64e1-410b-952c-a8d5c63242e8","Type":"ContainerStarted","Data":"26ea7178ec1095bdf653a1082865f9a022028975e62b29f28fd650012b718ed4"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.632591 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-pdvn5" podStartSLOduration=125.632568758 podStartE2EDuration="2m5.632568758s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.631780356 +0000 UTC m=+146.816757432" watchObservedRunningTime="2026-02-16 14:55:52.632568758 +0000 UTC m=+146.817545834"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.638565 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" event={"ID":"5621ad75-f2c2-44c8-aff8-ed4da48fc415","Type":"ContainerStarted","Data":"004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.638887 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" event={"ID":"5621ad75-f2c2-44c8-aff8-ed4da48fc415","Type":"ContainerStarted","Data":"faa1e5018382734db35e1205c39088b34faea391ec6e62672b88da102016cb47"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.639930 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.644582 4705 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bbtvp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.645208 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.682015 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" event={"ID":"e037a092-dcda-4227-9872-ea455a432ac6","Type":"ContainerStarted","Data":"40ae97dfb3ba0189218e688201750300667570f00b350f3cd03ceb79e94ebbbe"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.696605 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fnrqq" podStartSLOduration=125.696580719 podStartE2EDuration="2m5.696580719s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.695653123 +0000 UTC m=+146.880630199" watchObservedRunningTime="2026-02-16 14:55:52.696580719 +0000 UTC m=+146.881557795"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.697345 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.697910 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.197885845 +0000 UTC m=+147.382862931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.714864 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6fdc4" podStartSLOduration=125.714846621 podStartE2EDuration="2m5.714846621s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.714581944 +0000 UTC m=+146.899559040" watchObservedRunningTime="2026-02-16 14:55:52.714846621 +0000 UTC m=+146.899823697"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.722917 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jtcsx" event={"ID":"9e989356-1c20-489c-84a5-6437a37ab683","Type":"ContainerStarted","Data":"068c9f5bc65c9dc38f876816acca8d0b581141d4078e5147e17c02671e2c25dc"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.722979 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jtcsx" event={"ID":"9e989356-1c20-489c-84a5-6437a37ab683","Type":"ContainerStarted","Data":"599b4e599ca702b8767cf724c2dfd379e411b6b282b955996993be00e690fcab"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.740886 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" event={"ID":"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb","Type":"ContainerStarted","Data":"73d189304e4587d687987b13aecf131755858c9d304d1093f7f8be639c20b8ad"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.740926 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" event={"ID":"3cc87ed7-a3ad-41e3-bf19-5a9c8c1ebafb","Type":"ContainerStarted","Data":"9dca879c5ddabac706c5cf1e05914c586f6c8dafa9be326aa157fb3259f02090"}
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.757686 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bmbln" podStartSLOduration=125.757665649 podStartE2EDuration="2m5.757665649s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.754982776 +0000 UTC m=+146.939959862" watchObservedRunningTime="2026-02-16 14:55:52.757665649 +0000 UTC m=+146.942642745"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.759053 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sngv5"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.785080 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-vtlq5" podStartSLOduration=125.785052973 podStartE2EDuration="2m5.785052973s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.784470197 +0000 UTC m=+146.969447283" watchObservedRunningTime="2026-02-16 14:55:52.785052973 +0000 UTC m=+146.970030049"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.810446 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.812291 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.312275442 +0000 UTC m=+147.497252518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.914993 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" podStartSLOduration=125.914959226 podStartE2EDuration="2m5.914959226s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.914491483 +0000 UTC m=+147.099468569" watchObservedRunningTime="2026-02-16 14:55:52.914959226 +0000 UTC m=+147.099936302"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.918178 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" podStartSLOduration=125.918164174 podStartE2EDuration="2m5.918164174s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.885699901 +0000 UTC m=+147.070676967" watchObservedRunningTime="2026-02-16 14:55:52.918164174 +0000 UTC m=+147.103141250"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.915402 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.915465 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.415448799 +0000 UTC m=+147.600425875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.922546 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt"
Feb 16 14:55:52 crc kubenswrapper[4705]: E0216 14:55:52.923425 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.423412218 +0000 UTC m=+147.608389294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.944478 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mw9hv"
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.964508 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 14:55:52 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld
Feb 16 14:55:52 crc kubenswrapper[4705]: [+]process-running ok
Feb 16 14:55:52 crc kubenswrapper[4705]: healthz check failed
Feb 16 14:55:52 crc kubenswrapper[4705]: I0216 14:55:52.964641 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.000442 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-jtcsx" podStartSLOduration=8.000420107 podStartE2EDuration="8.000420107s" podCreationTimestamp="2026-02-16 14:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:53.000195401 +0000 UTC m=+147.185172497" watchObservedRunningTime="2026-02-16 14:55:53.000420107 +0000 UTC m=+147.185397183"
Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.000968 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9wzlt" podStartSLOduration=126.000962862 podStartE2EDuration="2m6.000962862s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:52.942609346 +0000 UTC m=+147.127586422" watchObservedRunningTime="2026-02-16 14:55:53.000962862 +0000 UTC m=+147.185939938"
Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.024198 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.024736 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.524707905 +0000 UTC m=+147.709684981 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.126182 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.126739 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.626722842 +0000 UTC m=+147.811699918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.227837 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.228051 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.728013248 +0000 UTC m=+147.912990324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.228300 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.228820 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.72879647 +0000 UTC m=+147.913773546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.329222 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.329554 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.829476729 +0000 UTC m=+148.014453805 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.329674 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.330183 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.830171939 +0000 UTC m=+148.015149005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.430960 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.431377 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:53.931343642 +0000 UTC m=+148.116320718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.533355 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.533877 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.033847402 +0000 UTC m=+148.218824468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.635853 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.636325 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.13629723 +0000 UTC m=+148.321274316 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.738271 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.738718 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.238702087 +0000 UTC m=+148.423679163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.746826 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" event={"ID":"6fee83f9-9187-4930-80d9-8337052eb6f7","Type":"ContainerStarted","Data":"a4360073b0475d8b8ad089b106c09ad0abbaa1c4d93dee9146c09db962b62639"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.749165 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" event={"ID":"f7690b59-a363-4f97-aa47-a6bb9fb41d20","Type":"ContainerStarted","Data":"94202f2bbfbb715a6c179c18b44dbd24324b818e885cde9af3f6ac6f8e340b94"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.749829 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.753396 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" event={"ID":"f74ef58c-d59c-43a0-8c8d-b6830dfd5120","Type":"ContainerStarted","Data":"b86eea4dbe7a3fecfe9d2221570c2a653671135d02a1b32158ff51c4a7908d92"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.755497 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hnkwm" 
event={"ID":"cc99828c-51d1-42ae-a28b-b0fad667f0fa","Type":"ContainerStarted","Data":"0149b75f4a45e69e00a8332c35a3085f0c034ead6c166884e9f5ba125282c9fa"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.755533 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hnkwm" event={"ID":"cc99828c-51d1-42ae-a28b-b0fad667f0fa","Type":"ContainerStarted","Data":"de2296abb94a076fdd50c9815476f641f0fb7c3d1cd2e065f30eab0914dd7599"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.755895 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hnkwm" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.760600 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" event={"ID":"1ac01610-0f79-4060-9820-5d2f6251a290","Type":"ContainerStarted","Data":"13ecb7ecb85a8d5af3e60edfaba13d180fea38c883188f0cf8a4b6e1f1af6b93"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.763086 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5ntj5" event={"ID":"e037a092-dcda-4227-9872-ea455a432ac6","Type":"ContainerStarted","Data":"6db88e5100c7618d96a9b82131c65acba7b2b387f459ca837994f4a5f99468b4"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.764896 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" event={"ID":"cab18608-4788-45e5-a45a-d74482f31738","Type":"ContainerStarted","Data":"92fa4d1fa19e2a9e53deac5f0674644e1fad929a54cbc4f8a3e6ae2b69d0f768"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.766688 4705 generic.go:334] "Generic (PLEG): container finished" podID="fc25ae00-316a-4dfb-8a83-72fe2318da5e" containerID="5fa9675e76e9d05c53516ed8415decce4c44f3785514ae5a86a5062278da9f97" exitCode=0 Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.767647 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" event={"ID":"fc25ae00-316a-4dfb-8a83-72fe2318da5e","Type":"ContainerDied","Data":"5fa9675e76e9d05c53516ed8415decce4c44f3785514ae5a86a5062278da9f97"} Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.770486 4705 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bbtvp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.770552 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.777544 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.777606 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bb6j" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.789356 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.825250 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-88bxc" podStartSLOduration=126.825188327 podStartE2EDuration="2m6.825188327s" podCreationTimestamp="2026-02-16 14:53:47 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:53.814218715 +0000 UTC m=+147.999195791" watchObservedRunningTime="2026-02-16 14:55:53.825188327 +0000 UTC m=+148.010165403" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.844730 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.846321 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.346293977 +0000 UTC m=+148.531271053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.846760 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkwfz" podStartSLOduration=126.846726239 podStartE2EDuration="2m6.846726239s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:53.843992184 +0000 UTC m=+148.028969260" watchObservedRunningTime="2026-02-16 14:55:53.846726239 +0000 UTC m=+148.031703315" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.910148 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xhcb8" podStartSLOduration=126.910114373 podStartE2EDuration="2m6.910114373s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:53.893040884 +0000 UTC m=+148.078017960" watchObservedRunningTime="2026-02-16 14:55:53.910114373 +0000 UTC m=+148.095091449" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.947106 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 14:55:53 crc 
kubenswrapper[4705]: [-]has-synced failed: reason withheld Feb 16 14:55:53 crc kubenswrapper[4705]: [+]process-running ok Feb 16 14:55:53 crc kubenswrapper[4705]: healthz check failed Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.947183 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.948013 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:53 crc kubenswrapper[4705]: E0216 14:55:53.948485 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.448459278 +0000 UTC m=+148.633436344 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:53 crc kubenswrapper[4705]: I0216 14:55:53.986956 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hnkwm" podStartSLOduration=8.986938127 podStartE2EDuration="8.986938127s" podCreationTimestamp="2026-02-16 14:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:53.985930069 +0000 UTC m=+148.170907145" watchObservedRunningTime="2026-02-16 14:55:53.986938127 +0000 UTC m=+148.171915193" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.049631 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.050101 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.550081824 +0000 UTC m=+148.735058900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.052143 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-s5jzr" podStartSLOduration=127.05211173 podStartE2EDuration="2m7.05211173s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:54.049054906 +0000 UTC m=+148.234031972" watchObservedRunningTime="2026-02-16 14:55:54.05211173 +0000 UTC m=+148.237088806" Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.153448 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.653424467 +0000 UTC m=+148.838401543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.153661 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.254609 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.254948 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.754916739 +0000 UTC m=+148.939893815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.255139 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.255538 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.755529926 +0000 UTC m=+148.940507002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.346788 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gbsfs" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.356070 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.356240 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.856210376 +0000 UTC m=+149.041187452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.356426 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.356788 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.856780241 +0000 UTC m=+149.041757317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.458085 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.458304 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.958255493 +0000 UTC m=+149.143232569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.458384 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.458450 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.458544 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.458656 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.458828 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.459774 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.460125 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:54.960108894 +0000 UTC m=+149.145085980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4msnt" (UID: "347b9dab-29d3-4126-994e-6501af72985a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.466403 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.481832 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.482950 4705 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.486655 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 
16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.542322 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.561413 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:54 crc kubenswrapper[4705]: E0216 14:55:54.561948 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:55.061924735 +0000 UTC m=+149.246901811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.621978 4705 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T14:55:54.482999294Z","Handler":null,"Name":""} Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.643212 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.656407 4705 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.656459 4705 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.658884 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.664158 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.669625 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.669670 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.790531 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" event={"ID":"cab18608-4788-45e5-a45a-d74482f31738","Type":"ContainerStarted","Data":"ca3659f781365b1b516d4f96015cc54b433e0791327ad0caee81b11538094e88"} Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.790571 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" event={"ID":"cab18608-4788-45e5-a45a-d74482f31738","Type":"ContainerStarted","Data":"ef139cdc6ba869521bc492e1a340603d4a5caa0d83f366580d8489b61027bf44"} Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.792432 4705 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bbtvp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.792473 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: 
connection refused" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.815791 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4msnt\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.848034 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.886921 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.944028 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.976121 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 14:55:54 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Feb 16 14:55:54 crc kubenswrapper[4705]: [+]process-running ok Feb 16 14:55:54 crc kubenswrapper[4705]: healthz check failed Feb 16 14:55:54 crc kubenswrapper[4705]: I0216 14:55:54.976240 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.163164 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wvxpr"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.180270 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvxpr"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.182553 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.184865 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.228530 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:55 crc kubenswrapper[4705]: W0216 14:55:55.278000 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-8bea621ce270730c0ae7cc2e6d9c2d3e35e52df6cf2e0f949cc64c06cd98135e WatchSource:0}: Error finding container 8bea621ce270730c0ae7cc2e6d9c2d3e35e52df6cf2e0f949cc64c06cd98135e: Status 404 returned error can't find the container with id 8bea621ce270730c0ae7cc2e6d9c2d3e35e52df6cf2e0f949cc64c06cd98135e Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.284610 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4msnt"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.302828 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-utilities\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.302904 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-catalog-content\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.302949 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bn7w\" (UniqueName: \"kubernetes.io/projected/895390cd-d0f8-46da-a932-6cccd295f203-kube-api-access-7bn7w\") pod \"community-operators-wvxpr\" (UID: 
\"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.341478 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sj9bt"] Feb 16 14:55:55 crc kubenswrapper[4705]: E0216 14:55:55.341802 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc25ae00-316a-4dfb-8a83-72fe2318da5e" containerName="collect-profiles" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.341820 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc25ae00-316a-4dfb-8a83-72fe2318da5e" containerName="collect-profiles" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.341933 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc25ae00-316a-4dfb-8a83-72fe2318da5e" containerName="collect-profiles" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.343475 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.349048 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.352581 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sj9bt"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.404702 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25ae00-316a-4dfb-8a83-72fe2318da5e-config-volume\") pod \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.404753 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgh9s\" (UniqueName: 
\"kubernetes.io/projected/fc25ae00-316a-4dfb-8a83-72fe2318da5e-kube-api-access-xgh9s\") pod \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.404833 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25ae00-316a-4dfb-8a83-72fe2318da5e-secret-volume\") pod \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\" (UID: \"fc25ae00-316a-4dfb-8a83-72fe2318da5e\") " Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.404984 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-utilities\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.405015 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-catalog-content\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.405051 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bn7w\" (UniqueName: \"kubernetes.io/projected/895390cd-d0f8-46da-a932-6cccd295f203-kube-api-access-7bn7w\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.406320 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc25ae00-316a-4dfb-8a83-72fe2318da5e-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"fc25ae00-316a-4dfb-8a83-72fe2318da5e" (UID: "fc25ae00-316a-4dfb-8a83-72fe2318da5e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.406949 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-utilities\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.407234 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-catalog-content\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.409449 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc25ae00-316a-4dfb-8a83-72fe2318da5e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fc25ae00-316a-4dfb-8a83-72fe2318da5e" (UID: "fc25ae00-316a-4dfb-8a83-72fe2318da5e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.413028 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc25ae00-316a-4dfb-8a83-72fe2318da5e-kube-api-access-xgh9s" (OuterVolumeSpecName: "kube-api-access-xgh9s") pod "fc25ae00-316a-4dfb-8a83-72fe2318da5e" (UID: "fc25ae00-316a-4dfb-8a83-72fe2318da5e"). InnerVolumeSpecName "kube-api-access-xgh9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.424710 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bn7w\" (UniqueName: \"kubernetes.io/projected/895390cd-d0f8-46da-a932-6cccd295f203-kube-api-access-7bn7w\") pod \"community-operators-wvxpr\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.506116 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-utilities\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.506279 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9mn7\" (UniqueName: \"kubernetes.io/projected/c8efc871-44f0-4bbd-b639-6adaee23319a-kube-api-access-x9mn7\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.506305 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-catalog-content\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.506444 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25ae00-316a-4dfb-8a83-72fe2318da5e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:55 crc 
kubenswrapper[4705]: I0216 14:55:55.506464 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25ae00-316a-4dfb-8a83-72fe2318da5e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.506475 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgh9s\" (UniqueName: \"kubernetes.io/projected/fc25ae00-316a-4dfb-8a83-72fe2318da5e-kube-api-access-xgh9s\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.534888 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ngfnt"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.535787 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.541855 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.554670 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngfnt"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.608250 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9mn7\" (UniqueName: \"kubernetes.io/projected/c8efc871-44f0-4bbd-b639-6adaee23319a-kube-api-access-x9mn7\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.609278 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-catalog-content\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.609424 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-utilities\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.610037 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-utilities\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.610351 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-catalog-content\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.627435 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9mn7\" (UniqueName: \"kubernetes.io/projected/c8efc871-44f0-4bbd-b639-6adaee23319a-kube-api-access-x9mn7\") pod \"certified-operators-sj9bt\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.660188 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.710580 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5j9\" (UniqueName: \"kubernetes.io/projected/1f1a76ff-82ae-4dac-88d2-20e6858835e3-kube-api-access-hr5j9\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.710636 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-utilities\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.710852 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-catalog-content\") pod \"community-operators-ngfnt\" (UID: 
\"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.744103 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bw88w"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.745356 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bw88w" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.762792 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvxpr"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.765092 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bw88w"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.799639 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" event={"ID":"fc25ae00-316a-4dfb-8a83-72fe2318da5e","Type":"ContainerDied","Data":"253e99d912eda9b50729d3d97f6f2413bf4ef41819d6f8568fe0d6b7421307b5"} Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.799674 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="253e99d912eda9b50729d3d97f6f2413bf4ef41819d6f8568fe0d6b7421307b5" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.799723 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.806918 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" event={"ID":"cab18608-4788-45e5-a45a-d74482f31738","Type":"ContainerStarted","Data":"2b803ff58466a770443c56d15dd4b3d36062da2a65c18df51765757a11c9bf30"} Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.813303 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr5j9\" (UniqueName: \"kubernetes.io/projected/1f1a76ff-82ae-4dac-88d2-20e6858835e3-kube-api-access-hr5j9\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.813355 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-utilities\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.813446 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-catalog-content\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.814496 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-utilities\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt" Feb 16 
14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.826953 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-catalog-content\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.827297 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5c2eb3901f0eda90e31a80d4a31d6c7490f6649027ed22a1423737b1c2301844"} Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.827438 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c7527bb1853886ed6ebb90e7c916e30c9eaf1b37102eb78025d1dfa09c6d6b79"} Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.827771 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.829515 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1dd0f68350423aa36bdc537a9fee235331107f7207ea48f52de7bde18793f670"} Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.829552 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8bea621ce270730c0ae7cc2e6d9c2d3e35e52df6cf2e0f949cc64c06cd98135e"} Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.831186 
4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d151a78ee3941775c0d63021654ae12ecbd51b8105e7a3c9d9380800b9e006c2"} Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.831208 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4e93b0f4b5c2f820c15512e9d816331baadb5e71875e205a0ff44977d644e909"} Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.833300 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerStarted","Data":"3d2f0059d40b4313cb2192bb0c8318a3e59e5de2da0badc178590ca35c5bf347"} Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.848314 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr5j9\" (UniqueName: \"kubernetes.io/projected/1f1a76ff-82ae-4dac-88d2-20e6858835e3-kube-api-access-hr5j9\") pod \"community-operators-ngfnt\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.848389 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" event={"ID":"347b9dab-29d3-4126-994e-6501af72985a","Type":"ContainerStarted","Data":"8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3"} Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.848424 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" 
event={"ID":"347b9dab-29d3-4126-994e-6501af72985a","Type":"ContainerStarted","Data":"a85e7e62d04fb828a3650bdfb354f55b8cca777243fccbeb90166d171d6b20fc"} Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.848512 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-2j46p" podStartSLOduration=10.84849355 podStartE2EDuration="10.84849355s" podCreationTimestamp="2026-02-16 14:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:55.844633563 +0000 UTC m=+150.029610629" watchObservedRunningTime="2026-02-16 14:55:55.84849355 +0000 UTC m=+150.033470626" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.849125 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.859532 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.914302 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-utilities\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.914351 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntm65\" (UniqueName: \"kubernetes.io/projected/37d84ef8-6e1f-4126-8356-189afb52b629-kube-api-access-ntm65\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.914461 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-catalog-content\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.936170 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.942618 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 14:55:55 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Feb 16 14:55:55 crc kubenswrapper[4705]: [+]process-running ok Feb 16 14:55:55 crc 
kubenswrapper[4705]: healthz check failed Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.942674 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.946698 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-cm4bk" Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.946929 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sj9bt"] Feb 16 14:55:55 crc kubenswrapper[4705]: I0216 14:55:55.959799 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" podStartSLOduration=128.959779761 podStartE2EDuration="2m8.959779761s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:55.959276777 +0000 UTC m=+150.144253853" watchObservedRunningTime="2026-02-16 14:55:55.959779761 +0000 UTC m=+150.144756837" Feb 16 14:55:55 crc kubenswrapper[4705]: W0216 14:55:55.963741 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8efc871_44f0_4bbd_b639_6adaee23319a.slice/crio-9c38ddd230468ed8cd1a56ea6b741c62c5cf9bb056f3dfa31abce6f0108cc3e2 WatchSource:0}: Error finding container 9c38ddd230468ed8cd1a56ea6b741c62c5cf9bb056f3dfa31abce6f0108cc3e2: Status 404 returned error can't find the container with id 9c38ddd230468ed8cd1a56ea6b741c62c5cf9bb056f3dfa31abce6f0108cc3e2 Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.018471 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-catalog-content\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w" Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.018637 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-utilities\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w" Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.018658 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntm65\" (UniqueName: \"kubernetes.io/projected/37d84ef8-6e1f-4126-8356-189afb52b629-kube-api-access-ntm65\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w" Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.020442 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-utilities\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w" Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.022038 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-catalog-content\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w" Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.059435 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntm65\" (UniqueName: 
\"kubernetes.io/projected/37d84ef8-6e1f-4126-8356-189afb52b629-kube-api-access-ntm65\") pod \"certified-operators-bw88w\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") " pod="openshift-marketplace/certified-operators-bw88w" Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.064835 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bw88w" Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.227068 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngfnt"] Feb 16 14:55:56 crc kubenswrapper[4705]: W0216 14:55:56.274553 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f1a76ff_82ae_4dac_88d2_20e6858835e3.slice/crio-8b27691923de02efc4eecc71d986b393c2bd7333093c0fb98186573296fa7938 WatchSource:0}: Error finding container 8b27691923de02efc4eecc71d986b393c2bd7333093c0fb98186573296fa7938: Status 404 returned error can't find the container with id 8b27691923de02efc4eecc71d986b393c2bd7333093c0fb98186573296fa7938 Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.437793 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.460639 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bw88w"] Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.853563 4705 generic.go:334] "Generic (PLEG): container finished" podID="895390cd-d0f8-46da-a932-6cccd295f203" containerID="47dd83c51982eee0fc8944965237e1d7e630e2a9915e5bf23151e62a40008638" exitCode=0 Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.853877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" 
event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerDied","Data":"47dd83c51982eee0fc8944965237e1d7e630e2a9915e5bf23151e62a40008638"} Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.855351 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.859051 4705 generic.go:334] "Generic (PLEG): container finished" podID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerID="4e44853d8ab25d2d5626a88e1f0b8ee2df4324e46ca5431c6ba290df4560e9f2" exitCode=0 Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.859090 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj9bt" event={"ID":"c8efc871-44f0-4bbd-b639-6adaee23319a","Type":"ContainerDied","Data":"4e44853d8ab25d2d5626a88e1f0b8ee2df4324e46ca5431c6ba290df4560e9f2"} Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.859107 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj9bt" event={"ID":"c8efc871-44f0-4bbd-b639-6adaee23319a","Type":"ContainerStarted","Data":"9c38ddd230468ed8cd1a56ea6b741c62c5cf9bb056f3dfa31abce6f0108cc3e2"} Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.862309 4705 generic.go:334] "Generic (PLEG): container finished" podID="37d84ef8-6e1f-4126-8356-189afb52b629" containerID="2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176" exitCode=0 Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.862407 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw88w" event={"ID":"37d84ef8-6e1f-4126-8356-189afb52b629","Type":"ContainerDied","Data":"2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176"} Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.862438 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw88w" 
event={"ID":"37d84ef8-6e1f-4126-8356-189afb52b629","Type":"ContainerStarted","Data":"84b0c4e14a3064d4d96f1f68cbab03b366c6b38944839fb2b7297a8f31d08a3b"} Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.864313 4705 generic.go:334] "Generic (PLEG): container finished" podID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerID="79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802" exitCode=0 Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.864342 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngfnt" event={"ID":"1f1a76ff-82ae-4dac-88d2-20e6858835e3","Type":"ContainerDied","Data":"79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802"} Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.864392 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngfnt" event={"ID":"1f1a76ff-82ae-4dac-88d2-20e6858835e3","Type":"ContainerStarted","Data":"8b27691923de02efc4eecc71d986b393c2bd7333093c0fb98186573296fa7938"} Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.947982 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 14:55:56 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Feb 16 14:55:56 crc kubenswrapper[4705]: [+]process-running ok Feb 16 14:55:56 crc kubenswrapper[4705]: healthz check failed Feb 16 14:55:56 crc kubenswrapper[4705]: I0216 14:55:56.948058 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.257985 4705 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-cdb8w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.258052 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cdb8w" podUID="29292cac-8f57-4f0b-aeb5-b4b7db9b3e45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.258103 4705 patch_prober.go:28] interesting pod/downloads-7954f5f757-cdb8w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.258187 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cdb8w" podUID="29292cac-8f57-4f0b-aeb5-b4b7db9b3e45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.339812 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh5s"] Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.340911 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.344438 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.355301 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh5s"] Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.442737 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-catalog-content\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.442813 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-utilities\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.443007 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmb82\" (UniqueName: \"kubernetes.io/projected/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-kube-api-access-hmb82\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.544444 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-catalog-content\") pod \"redhat-marketplace-gmh5s\" (UID: 
\"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.544893 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-utilities\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.544970 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmb82\" (UniqueName: \"kubernetes.io/projected/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-kube-api-access-hmb82\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.545386 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-catalog-content\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.545590 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-utilities\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.579394 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmb82\" (UniqueName: \"kubernetes.io/projected/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-kube-api-access-hmb82\") pod \"redhat-marketplace-gmh5s\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " 
pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.658711 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.737906 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vb279"] Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.738870 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.750388 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb279"] Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.757948 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.758678 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.760635 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.763422 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.780339 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.852094 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kmkv\" (UniqueName: \"kubernetes.io/projected/0ee875e7-6eab-4220-a29d-316c22f70703-kube-api-access-8kmkv\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.852454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-catalog-content\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.852483 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517926f3-df0a-4a5d-8806-80753c810a82-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.852514 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-utilities\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.852539 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517926f3-df0a-4a5d-8806-80753c810a82-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.946690 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 14:55:57 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Feb 16 14:55:57 crc kubenswrapper[4705]: [+]process-running ok Feb 16 14:55:57 crc kubenswrapper[4705]: healthz check failed Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.946776 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.953668 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517926f3-df0a-4a5d-8806-80753c810a82-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:55:57 crc 
kubenswrapper[4705]: I0216 14:55:57.953751 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-utilities\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.953806 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517926f3-df0a-4a5d-8806-80753c810a82-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.953884 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kmkv\" (UniqueName: \"kubernetes.io/projected/0ee875e7-6eab-4220-a29d-316c22f70703-kube-api-access-8kmkv\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.953910 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-catalog-content\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.954023 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517926f3-df0a-4a5d-8806-80753c810a82-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 
14:55:57.954412 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-utilities\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.954810 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-catalog-content\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.969184 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh5s"] Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.973496 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kmkv\" (UniqueName: \"kubernetes.io/projected/0ee875e7-6eab-4220-a29d-316c22f70703-kube-api-access-8kmkv\") pod \"redhat-marketplace-vb279\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") " pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:55:57 crc kubenswrapper[4705]: I0216 14:55:57.975255 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517926f3-df0a-4a5d-8806-80753c810a82-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.056631 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.078095 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.328929 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb279"] Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.342302 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qkkgp"] Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.344505 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.348427 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.362324 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qkkgp"] Feb 16 14:55:58 crc kubenswrapper[4705]: W0216 14:55:58.371213 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ee875e7_6eab_4220_a29d_316c22f70703.slice/crio-8eacf80745eba9b4023ca71499503eec2319ce40818e105b2747f4b39c4b0413 WatchSource:0}: Error finding container 8eacf80745eba9b4023ca71499503eec2319ce40818e105b2747f4b39c4b0413: Status 404 returned error can't find the container with id 8eacf80745eba9b4023ca71499503eec2319ce40818e105b2747f4b39c4b0413 Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.401126 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 14:55:58 crc kubenswrapper[4705]: W0216 14:55:58.402507 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod517926f3_df0a_4a5d_8806_80753c810a82.slice/crio-587a68b9589f40cbfffcce60269cdbad06a330c48fd7cf5bbf1c4ea8a61cec21 WatchSource:0}: Error finding 
container 587a68b9589f40cbfffcce60269cdbad06a330c48fd7cf5bbf1c4ea8a61cec21: Status 404 returned error can't find the container with id 587a68b9589f40cbfffcce60269cdbad06a330c48fd7cf5bbf1c4ea8a61cec21 Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.466063 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-utilities\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.466566 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfjqw\" (UniqueName: \"kubernetes.io/projected/112518bc-4caf-44c2-8920-185e2e90cc9b-kube-api-access-lfjqw\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.466624 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-catalog-content\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.520530 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.520623 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.525403 4705 patch_prober.go:28] interesting pod/console-f9d7485db-fnrqq container/console namespace/openshift-console: Startup probe 
status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.525565 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fnrqq" podUID="ee710a8b-3390-4749-949f-e8efa983b1ae" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.568353 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-utilities\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.568445 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfjqw\" (UniqueName: \"kubernetes.io/projected/112518bc-4caf-44c2-8920-185e2e90cc9b-kube-api-access-lfjqw\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.568513 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-catalog-content\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.569056 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-catalog-content\") pod \"redhat-operators-qkkgp\" (UID: 
\"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.569296 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-utilities\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.590683 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfjqw\" (UniqueName: \"kubernetes.io/projected/112518bc-4caf-44c2-8920-185e2e90cc9b-kube-api-access-lfjqw\") pod \"redhat-operators-qkkgp\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.644984 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.645841 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.648499 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.648882 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.649123 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.699147 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.713646 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.746647 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jlgwg"] Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.747989 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.754350 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jlgwg"] Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.771792 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.771847 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.876166 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.876637 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.876688 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjsqr\" (UniqueName: \"kubernetes.io/projected/c6d685f5-d57e-434b-93c8-727195de9479-kube-api-access-hjsqr\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.876711 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-catalog-content\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.876754 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-utilities\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.876520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.900875 4705 generic.go:334] "Generic (PLEG): container finished" podID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerID="d4b9a5df6e9f03bb94d5e2fb0f0b632bf65e0617fc3ef91575b6942f876f86c6" exitCode=0 Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.900957 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh5s" event={"ID":"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788","Type":"ContainerDied","Data":"d4b9a5df6e9f03bb94d5e2fb0f0b632bf65e0617fc3ef91575b6942f876f86c6"} Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.900990 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh5s" event={"ID":"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788","Type":"ContainerStarted","Data":"9e7c06275441e0dc9753d3e97f80b0b2fa0173ed74928bf3711fd998b37c0d36"} Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.911167 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.920631 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerStarted","Data":"d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826"} Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.920682 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerStarted","Data":"8eacf80745eba9b4023ca71499503eec2319ce40818e105b2747f4b39c4b0413"} Feb 16 14:55:58 
crc kubenswrapper[4705]: I0216 14:55:58.934572 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"517926f3-df0a-4a5d-8806-80753c810a82","Type":"ContainerStarted","Data":"587a68b9589f40cbfffcce60269cdbad06a330c48fd7cf5bbf1c4ea8a61cec21"} Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.939549 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.945879 4705 patch_prober.go:28] interesting pod/router-default-5444994796-mw9hv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 14:55:58 crc kubenswrapper[4705]: [-]has-synced failed: reason withheld Feb 16 14:55:58 crc kubenswrapper[4705]: [+]process-running ok Feb 16 14:55:58 crc kubenswrapper[4705]: healthz check failed Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.945953 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mw9hv" podUID="06c99403-3b09-4401-aa04-41a0ff730c68" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.973326 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.977948 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjsqr\" (UniqueName: \"kubernetes.io/projected/c6d685f5-d57e-434b-93c8-727195de9479-kube-api-access-hjsqr\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.978019 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-catalog-content\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.978091 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-utilities\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.979911 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-utilities\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.980145 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-catalog-content\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " 
pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:58 crc kubenswrapper[4705]: I0216 14:55:58.996197 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjsqr\" (UniqueName: \"kubernetes.io/projected/c6d685f5-d57e-434b-93c8-727195de9479-kube-api-access-hjsqr\") pod \"redhat-operators-jlgwg\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") " pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.082531 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.225828 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qkkgp"] Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.511880 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jlgwg"] Feb 16 14:55:59 crc kubenswrapper[4705]: W0216 14:55:59.527175 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6d685f5_d57e_434b_93c8_727195de9479.slice/crio-73444c3bc58c0f167a866ff98a950aa8d535f52acd246e74ec5adc8c7a296701 WatchSource:0}: Error finding container 73444c3bc58c0f167a866ff98a950aa8d535f52acd246e74ec5adc8c7a296701: Status 404 returned error can't find the container with id 73444c3bc58c0f167a866ff98a950aa8d535f52acd246e74ec5adc8c7a296701 Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.575451 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.943385 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.972313 4705 generic.go:334] "Generic (PLEG): container 
finished" podID="0ee875e7-6eab-4220-a29d-316c22f70703" containerID="d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826" exitCode=0 Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.972400 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerDied","Data":"d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826"} Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.980098 4705 generic.go:334] "Generic (PLEG): container finished" podID="c6d685f5-d57e-434b-93c8-727195de9479" containerID="fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4" exitCode=0 Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.980173 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlgwg" event={"ID":"c6d685f5-d57e-434b-93c8-727195de9479","Type":"ContainerDied","Data":"fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4"} Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.980198 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlgwg" event={"ID":"c6d685f5-d57e-434b-93c8-727195de9479","Type":"ContainerStarted","Data":"73444c3bc58c0f167a866ff98a950aa8d535f52acd246e74ec5adc8c7a296701"} Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.996216 4705 generic.go:334] "Generic (PLEG): container finished" podID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerID="2d8d5694b911f4b43d4018735e7222f174757c80b72ed579b3b1544c211daf10" exitCode=0 Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.996321 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerDied","Data":"2d8d5694b911f4b43d4018735e7222f174757c80b72ed579b3b1544c211daf10"} Feb 16 14:55:59 crc kubenswrapper[4705]: I0216 14:55:59.996352 
4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerStarted","Data":"5ca975ac41d20405951f16e100085714e84618ea7435589dc42061daef0e3c0d"} Feb 16 14:56:00 crc kubenswrapper[4705]: I0216 14:56:00.007664 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc","Type":"ContainerStarted","Data":"c996240936f5eeaeca98d5df76308b904d1db1d2dd5b82b1ddb5ffdbdb9a01f7"} Feb 16 14:56:00 crc kubenswrapper[4705]: I0216 14:56:00.026253 4705 generic.go:334] "Generic (PLEG): container finished" podID="517926f3-df0a-4a5d-8806-80753c810a82" containerID="b36af6dea40a2cc15704cf0e887eaea1973a1fc8db61b4e54a43cdebd09a1376" exitCode=0 Feb 16 14:56:00 crc kubenswrapper[4705]: I0216 14:56:00.026845 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"517926f3-df0a-4a5d-8806-80753c810a82","Type":"ContainerDied","Data":"b36af6dea40a2cc15704cf0e887eaea1973a1fc8db61b4e54a43cdebd09a1376"} Feb 16 14:56:00 crc kubenswrapper[4705]: I0216 14:56:00.037743 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mw9hv" Feb 16 14:56:01 crc kubenswrapper[4705]: I0216 14:56:01.069948 4705 generic.go:334] "Generic (PLEG): container finished" podID="fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc" containerID="e8e80d6deccafa5829fccbb82a70b8cb3676a15871eda8d63e729b44d986ab2b" exitCode=0 Feb 16 14:56:01 crc kubenswrapper[4705]: I0216 14:56:01.070200 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc","Type":"ContainerDied","Data":"e8e80d6deccafa5829fccbb82a70b8cb3676a15871eda8d63e729b44d986ab2b"} Feb 16 14:56:01 crc kubenswrapper[4705]: I0216 14:56:01.685757 
4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:56:01 crc kubenswrapper[4705]: I0216 14:56:01.685819 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.559250 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.650006 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517926f3-df0a-4a5d-8806-80753c810a82-kubelet-dir\") pod \"517926f3-df0a-4a5d-8806-80753c810a82\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.650121 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517926f3-df0a-4a5d-8806-80753c810a82-kube-api-access\") pod \"517926f3-df0a-4a5d-8806-80753c810a82\" (UID: \"517926f3-df0a-4a5d-8806-80753c810a82\") " Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.650189 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/517926f3-df0a-4a5d-8806-80753c810a82-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "517926f3-df0a-4a5d-8806-80753c810a82" (UID: "517926f3-df0a-4a5d-8806-80753c810a82"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.650474 4705 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/517926f3-df0a-4a5d-8806-80753c810a82-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.653012 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.659110 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517926f3-df0a-4a5d-8806-80753c810a82-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "517926f3-df0a-4a5d-8806-80753c810a82" (UID: "517926f3-df0a-4a5d-8806-80753c810a82"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.751861 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kube-api-access\") pod \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.752077 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kubelet-dir\") pod \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\" (UID: \"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc\") " Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.752504 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/517926f3-df0a-4a5d-8806-80753c810a82-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.752576 
4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc" (UID: "fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.757625 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc" (UID: "fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.856047 4705 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:02 crc kubenswrapper[4705]: I0216 14:56:02.856502 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.092153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc","Type":"ContainerDied","Data":"c996240936f5eeaeca98d5df76308b904d1db1d2dd5b82b1ddb5ffdbdb9a01f7"} Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.092204 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c996240936f5eeaeca98d5df76308b904d1db1d2dd5b82b1ddb5ffdbdb9a01f7" Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.092391 4705 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.097777 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"517926f3-df0a-4a5d-8806-80753c810a82","Type":"ContainerDied","Data":"587a68b9589f40cbfffcce60269cdbad06a330c48fd7cf5bbf1c4ea8a61cec21"} Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.097835 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="587a68b9589f40cbfffcce60269cdbad06a330c48fd7cf5bbf1c4ea8a61cec21" Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.097942 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:56:03 crc kubenswrapper[4705]: I0216 14:56:03.749541 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hnkwm" Feb 16 14:56:07 crc kubenswrapper[4705]: I0216 14:56:07.265485 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-cdb8w" Feb 16 14:56:08 crc kubenswrapper[4705]: I0216 14:56:08.524984 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:56:08 crc kubenswrapper[4705]: I0216 14:56:08.532553 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 14:56:09 crc kubenswrapper[4705]: I0216 14:56:09.682939 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:56:09 crc 
kubenswrapper[4705]: I0216 14:56:09.690495 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67dea3c6-e6a4-4078-9bf2-6928c39f498b-metrics-certs\") pod \"network-metrics-daemon-8m64f\" (UID: \"67dea3c6-e6a4-4078-9bf2-6928c39f498b\") " pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:56:09 crc kubenswrapper[4705]: I0216 14:56:09.973630 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8m64f" Feb 16 14:56:12 crc kubenswrapper[4705]: I0216 14:56:12.657085 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s6knp"] Feb 16 14:56:12 crc kubenswrapper[4705]: I0216 14:56:12.657321 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerName="controller-manager" containerID="cri-o://579ed418f5dc819f6c48558bfbfa22b50b82668164fdcd76aa1e3a094e7dce19" gracePeriod=30 Feb 16 14:56:12 crc kubenswrapper[4705]: I0216 14:56:12.690501 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"] Feb 16 14:56:12 crc kubenswrapper[4705]: I0216 14:56:12.691515 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" containerID="cri-o://b86aba374934a2e53dfafb4487f9bd171946f1f9c67960302e0552580e0f1f6d" gracePeriod=30 Feb 16 14:56:14 crc kubenswrapper[4705]: I0216 14:56:14.853648 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 14:56:15 crc kubenswrapper[4705]: I0216 
14:56:15.187193 4705 generic.go:334] "Generic (PLEG): container finished" podID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerID="579ed418f5dc819f6c48558bfbfa22b50b82668164fdcd76aa1e3a094e7dce19" exitCode=0 Feb 16 14:56:15 crc kubenswrapper[4705]: I0216 14:56:15.187242 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" event={"ID":"51cb62a1-dd06-4f6b-aa37-c824973a7df0","Type":"ContainerDied","Data":"579ed418f5dc819f6c48558bfbfa22b50b82668164fdcd76aa1e3a094e7dce19"} Feb 16 14:56:15 crc kubenswrapper[4705]: I0216 14:56:15.854817 4705 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-s6knp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 16 14:56:15 crc kubenswrapper[4705]: I0216 14:56:15.854944 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 16 14:56:16 crc kubenswrapper[4705]: I0216 14:56:16.199521 4705 generic.go:334] "Generic (PLEG): container finished" podID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerID="b86aba374934a2e53dfafb4487f9bd171946f1f9c67960302e0552580e0f1f6d" exitCode=0 Feb 16 14:56:16 crc kubenswrapper[4705]: I0216 14:56:16.199939 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" event={"ID":"a8302bc0-d3ed-4950-a728-5569d512a90c","Type":"ContainerDied","Data":"b86aba374934a2e53dfafb4487f9bd171946f1f9c67960302e0552580e0f1f6d"} Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.384773 4705 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.390827 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.463623 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58c68744d-xl8vm"] Feb 16 14:56:19 crc kubenswrapper[4705]: E0216 14:56:19.464340 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.464472 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" Feb 16 14:56:19 crc kubenswrapper[4705]: E0216 14:56:19.464589 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc" containerName="pruner" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.464684 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc" containerName="pruner" Feb 16 14:56:19 crc kubenswrapper[4705]: E0216 14:56:19.466847 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerName="controller-manager" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.466888 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerName="controller-manager" Feb 16 14:56:19 crc kubenswrapper[4705]: E0216 14:56:19.466914 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517926f3-df0a-4a5d-8806-80753c810a82" containerName="pruner" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.466923 4705 
state_mem.go:107] "Deleted CPUSet assignment" podUID="517926f3-df0a-4a5d-8806-80753c810a82" containerName="pruner" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.467301 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="517926f3-df0a-4a5d-8806-80753c810a82" containerName="pruner" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.467333 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" containerName="controller-manager" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.467344 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.467361 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa4b8d8e-ad1f-4ceb-b472-a38fac7cfdfc" containerName="pruner" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.467951 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c68744d-xl8vm"] Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.468070 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.533855 4705 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ksptd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.533997 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556558 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-config\") pod \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556597 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-proxy-ca-bundles\") pod \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556708 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5khd\" (UniqueName: \"kubernetes.io/projected/51cb62a1-dd06-4f6b-aa37-c824973a7df0-kube-api-access-r5khd\") pod 
\"51cb62a1-dd06-4f6b-aa37-c824973a7df0\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556743 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca\") pod \"a8302bc0-d3ed-4950-a728-5569d512a90c\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556771 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2k46\" (UniqueName: \"kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46\") pod \"a8302bc0-d3ed-4950-a728-5569d512a90c\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556818 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-client-ca\") pod \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556847 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config\") pod \"a8302bc0-d3ed-4950-a728-5569d512a90c\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556945 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert\") pod \"a8302bc0-d3ed-4950-a728-5569d512a90c\" (UID: \"a8302bc0-d3ed-4950-a728-5569d512a90c\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.556986 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/51cb62a1-dd06-4f6b-aa37-c824973a7df0-serving-cert\") pod \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\" (UID: \"51cb62a1-dd06-4f6b-aa37-c824973a7df0\") " Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.557253 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-client-ca\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.557292 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-config\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.558387 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-client-ca" (OuterVolumeSpecName: "client-ca") pod "51cb62a1-dd06-4f6b-aa37-c824973a7df0" (UID: "51cb62a1-dd06-4f6b-aa37-c824973a7df0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.558453 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-config" (OuterVolumeSpecName: "config") pod "51cb62a1-dd06-4f6b-aa37-c824973a7df0" (UID: "51cb62a1-dd06-4f6b-aa37-c824973a7df0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.558657 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca" (OuterVolumeSpecName: "client-ca") pod "a8302bc0-d3ed-4950-a728-5569d512a90c" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.557339 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-proxy-ca-bundles\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559000 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config" (OuterVolumeSpecName: "config") pod "a8302bc0-d3ed-4950-a728-5569d512a90c" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559073 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcrff\" (UniqueName: \"kubernetes.io/projected/42f71fd9-bba2-481c-8b42-46894c93e49d-kube-api-access-wcrff\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559103 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f71fd9-bba2-481c-8b42-46894c93e49d-serving-cert\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559243 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559257 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559269 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.559279 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8302bc0-d3ed-4950-a728-5569d512a90c-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc 
kubenswrapper[4705]: I0216 14:56:19.560173 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "51cb62a1-dd06-4f6b-aa37-c824973a7df0" (UID: "51cb62a1-dd06-4f6b-aa37-c824973a7df0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.565529 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a8302bc0-d3ed-4950-a728-5569d512a90c" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.565632 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46" (OuterVolumeSpecName: "kube-api-access-x2k46") pod "a8302bc0-d3ed-4950-a728-5569d512a90c" (UID: "a8302bc0-d3ed-4950-a728-5569d512a90c"). InnerVolumeSpecName "kube-api-access-x2k46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.565637 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51cb62a1-dd06-4f6b-aa37-c824973a7df0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "51cb62a1-dd06-4f6b-aa37-c824973a7df0" (UID: "51cb62a1-dd06-4f6b-aa37-c824973a7df0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.574003 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51cb62a1-dd06-4f6b-aa37-c824973a7df0-kube-api-access-r5khd" (OuterVolumeSpecName: "kube-api-access-r5khd") pod "51cb62a1-dd06-4f6b-aa37-c824973a7df0" (UID: "51cb62a1-dd06-4f6b-aa37-c824973a7df0"). InnerVolumeSpecName "kube-api-access-r5khd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.660387 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-client-ca\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.660439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-config\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.660467 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-proxy-ca-bundles\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcrff\" (UniqueName: 
\"kubernetes.io/projected/42f71fd9-bba2-481c-8b42-46894c93e49d-kube-api-access-wcrff\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661250 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f71fd9-bba2-481c-8b42-46894c93e49d-serving-cert\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661399 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5khd\" (UniqueName: \"kubernetes.io/projected/51cb62a1-dd06-4f6b-aa37-c824973a7df0-kube-api-access-r5khd\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661418 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2k46\" (UniqueName: \"kubernetes.io/projected/a8302bc0-d3ed-4950-a728-5569d512a90c-kube-api-access-x2k46\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661432 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8302bc0-d3ed-4950-a728-5569d512a90c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661445 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51cb62a1-dd06-4f6b-aa37-c824973a7df0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661456 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51cb62a1-dd06-4f6b-aa37-c824973a7df0-proxy-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.661947 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-client-ca\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.662055 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-config\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.662509 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-proxy-ca-bundles\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.666890 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f71fd9-bba2-481c-8b42-46894c93e49d-serving-cert\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.682062 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcrff\" (UniqueName: \"kubernetes.io/projected/42f71fd9-bba2-481c-8b42-46894c93e49d-kube-api-access-wcrff\") pod \"controller-manager-58c68744d-xl8vm\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " 
pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:19 crc kubenswrapper[4705]: I0216 14:56:19.791325 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.223146 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" event={"ID":"51cb62a1-dd06-4f6b-aa37-c824973a7df0","Type":"ContainerDied","Data":"68a02cdf61ab6ecf3bd32bb3e54bfbe8ef3fe251a6cfa9d9244adfdab9a8cc1a"} Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.223213 4705 scope.go:117] "RemoveContainer" containerID="579ed418f5dc819f6c48558bfbfa22b50b82668164fdcd76aa1e3a094e7dce19" Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.223357 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s6knp" Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.238358 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" event={"ID":"a8302bc0-d3ed-4950-a728-5569d512a90c","Type":"ContainerDied","Data":"a885e38805c34d5c1e7c89b9f1f29de1c4b5e2713a9a9b37541794c592748f30"} Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.238519 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd" Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.267016 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s6knp"] Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.269603 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s6knp"] Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.274676 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"] Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.280000 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ksptd"] Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.427275 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51cb62a1-dd06-4f6b-aa37-c824973a7df0" path="/var/lib/kubelet/pods/51cb62a1-dd06-4f6b-aa37-c824973a7df0/volumes" Feb 16 14:56:20 crc kubenswrapper[4705]: I0216 14:56:20.428198 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8302bc0-d3ed-4950-a728-5569d512a90c" path="/var/lib/kubelet/pods/a8302bc0-d3ed-4950-a728-5569d512a90c/volumes" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.270858 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh"] Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.272332 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.273891 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.275201 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.275208 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.275642 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.276749 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.276850 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.289204 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh"] Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.406876 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-client-ca\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.406922 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-serving-cert\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.406946 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zkw4\" (UniqueName: \"kubernetes.io/projected/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-kube-api-access-2zkw4\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.407035 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-config\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.507996 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-client-ca\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.508313 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-serving-cert\") pod 
\"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.508508 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zkw4\" (UniqueName: \"kubernetes.io/projected/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-kube-api-access-2zkw4\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.508631 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-config\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.509591 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-client-ca\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.509950 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-config\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.519148 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-serving-cert\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.523258 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zkw4\" (UniqueName: \"kubernetes.io/projected/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-kube-api-access-2zkw4\") pod \"route-controller-manager-7697b6646d-2fqmh\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:22 crc kubenswrapper[4705]: I0216 14:56:22.603906 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:27 crc kubenswrapper[4705]: E0216 14:56:27.022256 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 14:56:27 crc kubenswrapper[4705]: E0216 14:56:27.023492 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hr5j9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ngfnt_openshift-marketplace(1f1a76ff-82ae-4dac-88d2-20e6858835e3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 14:56:27 crc kubenswrapper[4705]: I0216 14:56:27.024413 4705 scope.go:117] "RemoveContainer" containerID="b86aba374934a2e53dfafb4487f9bd171946f1f9c67960302e0552580e0f1f6d" Feb 16 14:56:27 crc kubenswrapper[4705]: E0216 14:56:27.024629 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image 
from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ngfnt" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" Feb 16 14:56:27 crc kubenswrapper[4705]: I0216 14:56:27.298049 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerStarted","Data":"142e52fe965dccc8447bce8b51d66eb18e77b2fbf8857b7b9eaf42bda581cb4b"} Feb 16 14:56:27 crc kubenswrapper[4705]: E0216 14:56:27.320690 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ngfnt" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" Feb 16 14:56:27 crc kubenswrapper[4705]: I0216 14:56:27.414696 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c68744d-xl8vm"] Feb 16 14:56:27 crc kubenswrapper[4705]: W0216 14:56:27.427098 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42f71fd9_bba2_481c_8b42_46894c93e49d.slice/crio-4c009eecd078d5136e14450030093aead0456135c5ec094b988fd4282fa296e9 WatchSource:0}: Error finding container 4c009eecd078d5136e14450030093aead0456135c5ec094b988fd4282fa296e9: Status 404 returned error can't find the container with id 4c009eecd078d5136e14450030093aead0456135c5ec094b988fd4282fa296e9 Feb 16 14:56:27 crc kubenswrapper[4705]: I0216 14:56:27.427696 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8m64f"] Feb 16 14:56:27 crc kubenswrapper[4705]: W0216 14:56:27.430324 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67dea3c6_e6a4_4078_9bf2_6928c39f498b.slice/crio-0a2901551587f9516f8d8d162155036d25d2cfaa1ea31ea3ccfe8605b7197045 WatchSource:0}: Error finding container 0a2901551587f9516f8d8d162155036d25d2cfaa1ea31ea3ccfe8605b7197045: Status 404 returned error can't find the container with id 0a2901551587f9516f8d8d162155036d25d2cfaa1ea31ea3ccfe8605b7197045 Feb 16 14:56:27 crc kubenswrapper[4705]: I0216 14:56:27.437428 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh"] Feb 16 14:56:28 crc kubenswrapper[4705]: E0216 14:56:28.175752 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6d685f5_d57e_434b_93c8_727195de9479.slice/crio-22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130.scope\": RecentStats: unable to find data in memory cache]" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.309712 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" event={"ID":"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650","Type":"ContainerStarted","Data":"5ba52b7047a4bed388cbfd455b1ec058a60b989e6041232ddaab6b24cae29873"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.309763 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" event={"ID":"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650","Type":"ContainerStarted","Data":"9a0411d516836163a23542eb670fb3eb2f699e5a31aa118fbcbbe952241a5c87"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.311456 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 
14:56:28.318722 4705 generic.go:334] "Generic (PLEG): container finished" podID="37d84ef8-6e1f-4126-8356-189afb52b629" containerID="fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46" exitCode=0 Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.319606 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw88w" event={"ID":"37d84ef8-6e1f-4126-8356-189afb52b629","Type":"ContainerDied","Data":"fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.339906 4705 generic.go:334] "Generic (PLEG): container finished" podID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerID="3bf941b0ceb33444ebc5dd947fedfa63976db0f6ca005483c4d7b0a244761dba" exitCode=0 Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.340015 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh5s" event={"ID":"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788","Type":"ContainerDied","Data":"3bf941b0ceb33444ebc5dd947fedfa63976db0f6ca005483c4d7b0a244761dba"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.343097 4705 generic.go:334] "Generic (PLEG): container finished" podID="895390cd-d0f8-46da-a932-6cccd295f203" containerID="142e52fe965dccc8447bce8b51d66eb18e77b2fbf8857b7b9eaf42bda581cb4b" exitCode=0 Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.343153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerDied","Data":"142e52fe965dccc8447bce8b51d66eb18e77b2fbf8857b7b9eaf42bda581cb4b"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.350115 4705 generic.go:334] "Generic (PLEG): container finished" podID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerID="73ba943d06af17d02c46446ace18358f2e018622fa9d08256b673061932ee618" exitCode=0 Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 
14:56:28.350233 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj9bt" event={"ID":"c8efc871-44f0-4bbd-b639-6adaee23319a","Type":"ContainerDied","Data":"73ba943d06af17d02c46446ace18358f2e018622fa9d08256b673061932ee618"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.350887 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" podStartSLOduration=16.350869877 podStartE2EDuration="16.350869877s" podCreationTimestamp="2026-02-16 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:28.348977915 +0000 UTC m=+182.533954991" watchObservedRunningTime="2026-02-16 14:56:28.350869877 +0000 UTC m=+182.535846973" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.356192 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerStarted","Data":"bc3f70071f15f7c623a394166db10d02b47e2458284d6c7b790a1b750e33d8c7"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.359575 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" event={"ID":"42f71fd9-bba2-481c-8b42-46894c93e49d","Type":"ContainerStarted","Data":"db98b705cf0086bcf97e02e77a379aa0f51c317cbb3fc2152c663e22ae52b5a8"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.359599 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" event={"ID":"42f71fd9-bba2-481c-8b42-46894c93e49d","Type":"ContainerStarted","Data":"4c009eecd078d5136e14450030093aead0456135c5ec094b988fd4282fa296e9"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.360224 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.364925 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.369279 4705 generic.go:334] "Generic (PLEG): container finished" podID="0ee875e7-6eab-4220-a29d-316c22f70703" containerID="a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507" exitCode=0 Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.369326 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerDied","Data":"a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.373573 4705 generic.go:334] "Generic (PLEG): container finished" podID="c6d685f5-d57e-434b-93c8-727195de9479" containerID="22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130" exitCode=0 Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.373614 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlgwg" event={"ID":"c6d685f5-d57e-434b-93c8-727195de9479","Type":"ContainerDied","Data":"22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.390665 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8m64f" event={"ID":"67dea3c6-e6a4-4078-9bf2-6928c39f498b","Type":"ContainerStarted","Data":"3bad768853b1c2d8d2d2f1e547c5acf2aac3823d8b60521f81be7dba9e0d242e"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.390719 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8m64f" 
event={"ID":"67dea3c6-e6a4-4078-9bf2-6928c39f498b","Type":"ContainerStarted","Data":"0a2901551587f9516f8d8d162155036d25d2cfaa1ea31ea3ccfe8605b7197045"} Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.546137 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" podStartSLOduration=16.546101338 podStartE2EDuration="16.546101338s" podCreationTimestamp="2026-02-16 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:28.539049324 +0000 UTC m=+182.724026420" watchObservedRunningTime="2026-02-16 14:56:28.546101338 +0000 UTC m=+182.731078434" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.637975 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:28 crc kubenswrapper[4705]: I0216 14:56:28.676280 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qtmdz" Feb 16 14:56:29 crc kubenswrapper[4705]: I0216 14:56:29.399734 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8m64f" event={"ID":"67dea3c6-e6a4-4078-9bf2-6928c39f498b","Type":"ContainerStarted","Data":"65daf2952e4d153e851655f006c9bc78eeec8179a7fd2a728b9c8943b8801e3e"} Feb 16 14:56:29 crc kubenswrapper[4705]: I0216 14:56:29.404911 4705 generic.go:334] "Generic (PLEG): container finished" podID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerID="bc3f70071f15f7c623a394166db10d02b47e2458284d6c7b790a1b750e33d8c7" exitCode=0 Feb 16 14:56:29 crc kubenswrapper[4705]: I0216 14:56:29.405207 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" 
event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerDied","Data":"bc3f70071f15f7c623a394166db10d02b47e2458284d6c7b790a1b750e33d8c7"} Feb 16 14:56:29 crc kubenswrapper[4705]: I0216 14:56:29.420631 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-8m64f" podStartSLOduration=162.420612055 podStartE2EDuration="2m42.420612055s" podCreationTimestamp="2026-02-16 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:29.419528165 +0000 UTC m=+183.604505251" watchObservedRunningTime="2026-02-16 14:56:29.420612055 +0000 UTC m=+183.605589121" Feb 16 14:56:31 crc kubenswrapper[4705]: I0216 14:56:31.418991 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh5s" event={"ID":"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788","Type":"ContainerStarted","Data":"3cb8479b4305f364c5f6ead421d66ba76fae3e3cb48c375431bc5f1d1839a870"} Feb 16 14:56:31 crc kubenswrapper[4705]: I0216 14:56:31.440708 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gmh5s" podStartSLOduration=2.941285534 podStartE2EDuration="34.440681969s" podCreationTimestamp="2026-02-16 14:55:57 +0000 UTC" firstStartedPulling="2026-02-16 14:55:58.904549763 +0000 UTC m=+153.089526829" lastFinishedPulling="2026-02-16 14:56:30.403946188 +0000 UTC m=+184.588923264" observedRunningTime="2026-02-16 14:56:31.440177075 +0000 UTC m=+185.625154161" watchObservedRunningTime="2026-02-16 14:56:31.440681969 +0000 UTC m=+185.625659045" Feb 16 14:56:31 crc kubenswrapper[4705]: I0216 14:56:31.684390 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 16 14:56:31 crc kubenswrapper[4705]: I0216 14:56:31.684472 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:56:32 crc kubenswrapper[4705]: I0216 14:56:32.599999 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c68744d-xl8vm"] Feb 16 14:56:32 crc kubenswrapper[4705]: I0216 14:56:32.600226 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" podUID="42f71fd9-bba2-481c-8b42-46894c93e49d" containerName="controller-manager" containerID="cri-o://db98b705cf0086bcf97e02e77a379aa0f51c317cbb3fc2152c663e22ae52b5a8" gracePeriod=30 Feb 16 14:56:32 crc kubenswrapper[4705]: I0216 14:56:32.702783 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh"] Feb 16 14:56:32 crc kubenswrapper[4705]: I0216 14:56:32.704709 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" podUID="3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" containerName="route-controller-manager" containerID="cri-o://5ba52b7047a4bed388cbfd455b1ec058a60b989e6041232ddaab6b24cae29873" gracePeriod=30 Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.437602 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerStarted","Data":"44b2753298e481a1af81ac801ec3b5340db0dc87e754c807e8d3e4dee8fa47ff"} Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 
14:56:33.439572 4705 generic.go:334] "Generic (PLEG): container finished" podID="3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" containerID="5ba52b7047a4bed388cbfd455b1ec058a60b989e6041232ddaab6b24cae29873" exitCode=0 Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.439634 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" event={"ID":"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650","Type":"ContainerDied","Data":"5ba52b7047a4bed388cbfd455b1ec058a60b989e6041232ddaab6b24cae29873"} Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.439655 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" event={"ID":"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650","Type":"ContainerDied","Data":"9a0411d516836163a23542eb670fb3eb2f699e5a31aa118fbcbbe952241a5c87"} Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.439667 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a0411d516836163a23542eb670fb3eb2f699e5a31aa118fbcbbe952241a5c87" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.445336 4705 generic.go:334] "Generic (PLEG): container finished" podID="42f71fd9-bba2-481c-8b42-46894c93e49d" containerID="db98b705cf0086bcf97e02e77a379aa0f51c317cbb3fc2152c663e22ae52b5a8" exitCode=0 Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.445419 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" event={"ID":"42f71fd9-bba2-481c-8b42-46894c93e49d","Type":"ContainerDied","Data":"db98b705cf0086bcf97e02e77a379aa0f51c317cbb3fc2152c663e22ae52b5a8"} Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.462193 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qkkgp" podStartSLOduration=3.142344556 podStartE2EDuration="35.462169241s" 
podCreationTimestamp="2026-02-16 14:55:58 +0000 UTC" firstStartedPulling="2026-02-16 14:56:00.002287212 +0000 UTC m=+154.187264288" lastFinishedPulling="2026-02-16 14:56:32.322111897 +0000 UTC m=+186.507088973" observedRunningTime="2026-02-16 14:56:33.459318122 +0000 UTC m=+187.644295218" watchObservedRunningTime="2026-02-16 14:56:33.462169241 +0000 UTC m=+187.647146317" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.481785 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.602640 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zkw4\" (UniqueName: \"kubernetes.io/projected/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-kube-api-access-2zkw4\") pod \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.603252 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-config\") pod \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.603331 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-serving-cert\") pod \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.603472 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-client-ca\") pod \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\" (UID: \"3df1ce7e-6cc2-4619-8df9-bee8e9ae6650\") " 
Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.604059 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-config" (OuterVolumeSpecName: "config") pod "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" (UID: "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.604615 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-client-ca" (OuterVolumeSpecName: "client-ca") pod "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" (UID: "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.612484 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" (UID: "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.617000 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-kube-api-access-2zkw4" (OuterVolumeSpecName: "kube-api-access-2zkw4") pod "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" (UID: "3df1ce7e-6cc2-4619-8df9-bee8e9ae6650"). InnerVolumeSpecName "kube-api-access-2zkw4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.705324 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.705436 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zkw4\" (UniqueName: \"kubernetes.io/projected/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-kube-api-access-2zkw4\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.705451 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.705460 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:33 crc kubenswrapper[4705]: I0216 14:56:33.855314 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.009416 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-config\") pod \"42f71fd9-bba2-481c-8b42-46894c93e49d\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.009488 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-proxy-ca-bundles\") pod \"42f71fd9-bba2-481c-8b42-46894c93e49d\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.009514 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcrff\" (UniqueName: \"kubernetes.io/projected/42f71fd9-bba2-481c-8b42-46894c93e49d-kube-api-access-wcrff\") pod \"42f71fd9-bba2-481c-8b42-46894c93e49d\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.009585 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-client-ca\") pod \"42f71fd9-bba2-481c-8b42-46894c93e49d\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.009604 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f71fd9-bba2-481c-8b42-46894c93e49d-serving-cert\") pod \"42f71fd9-bba2-481c-8b42-46894c93e49d\" (UID: \"42f71fd9-bba2-481c-8b42-46894c93e49d\") " Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.010432 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-config" (OuterVolumeSpecName: "config") pod "42f71fd9-bba2-481c-8b42-46894c93e49d" (UID: "42f71fd9-bba2-481c-8b42-46894c93e49d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.010557 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "42f71fd9-bba2-481c-8b42-46894c93e49d" (UID: "42f71fd9-bba2-481c-8b42-46894c93e49d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.010586 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-client-ca" (OuterVolumeSpecName: "client-ca") pod "42f71fd9-bba2-481c-8b42-46894c93e49d" (UID: "42f71fd9-bba2-481c-8b42-46894c93e49d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.017943 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f71fd9-bba2-481c-8b42-46894c93e49d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "42f71fd9-bba2-481c-8b42-46894c93e49d" (UID: "42f71fd9-bba2-481c-8b42-46894c93e49d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.018122 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f71fd9-bba2-481c-8b42-46894c93e49d-kube-api-access-wcrff" (OuterVolumeSpecName: "kube-api-access-wcrff") pod "42f71fd9-bba2-481c-8b42-46894c93e49d" (UID: "42f71fd9-bba2-481c-8b42-46894c93e49d"). InnerVolumeSpecName "kube-api-access-wcrff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.111681 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.111725 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42f71fd9-bba2-481c-8b42-46894c93e49d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.111736 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.111745 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42f71fd9-bba2-481c-8b42-46894c93e49d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.111758 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcrff\" (UniqueName: \"kubernetes.io/projected/42f71fd9-bba2-481c-8b42-46894c93e49d-kube-api-access-wcrff\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.281779 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f948cbdb-xlnlb"] Feb 16 14:56:34 crc kubenswrapper[4705]: E0216 14:56:34.282214 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" containerName="route-controller-manager" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.282237 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" containerName="route-controller-manager" Feb 16 14:56:34 
crc kubenswrapper[4705]: E0216 14:56:34.282248 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f71fd9-bba2-481c-8b42-46894c93e49d" containerName="controller-manager" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.282256 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f71fd9-bba2-481c-8b42-46894c93e49d" containerName="controller-manager" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.282380 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f71fd9-bba2-481c-8b42-46894c93e49d" containerName="controller-manager" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.282399 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" containerName="route-controller-manager" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.282941 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.285737 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.286818 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.295486 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f948cbdb-xlnlb"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.299760 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430065 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-proxy-ca-bundles\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430120 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkncq\" (UniqueName: \"kubernetes.io/projected/082d4064-6b1c-4a39-9839-3466e7a1ce3a-kube-api-access-jkncq\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430185 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-config\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430239 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-serving-cert\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430263 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082d4064-6b1c-4a39-9839-3466e7a1ce3a-serving-cert\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430308 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-config\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430343 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg54f\" (UniqueName: \"kubernetes.io/projected/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-kube-api-access-gg54f\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430388 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-client-ca\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " 
pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.430421 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-client-ca\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.453491 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" event={"ID":"42f71fd9-bba2-481c-8b42-46894c93e49d","Type":"ContainerDied","Data":"4c009eecd078d5136e14450030093aead0456135c5ec094b988fd4282fa296e9"} Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.453529 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c68744d-xl8vm" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.453554 4705 scope.go:117] "RemoveContainer" containerID="db98b705cf0086bcf97e02e77a379aa0f51c317cbb3fc2152c663e22ae52b5a8" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.456442 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.456514 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerStarted","Data":"d33c37236673d66e2901d64db78200c763977b99a1686820a64dbf3d5e56fb7b"} Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.498535 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wvxpr" podStartSLOduration=2.872769518 podStartE2EDuration="39.498517812s" podCreationTimestamp="2026-02-16 14:55:55 +0000 UTC" firstStartedPulling="2026-02-16 14:55:56.855128762 +0000 UTC m=+151.040105828" lastFinishedPulling="2026-02-16 14:56:33.480877036 +0000 UTC m=+187.665854122" observedRunningTime="2026-02-16 14:56:34.478495371 +0000 UTC m=+188.663472467" watchObservedRunningTime="2026-02-16 14:56:34.498517812 +0000 UTC m=+188.683494888" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.498738 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c68744d-xl8vm"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.502343 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58c68744d-xl8vm"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.508041 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.510732 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7697b6646d-2fqmh"] Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.531636 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jkncq\" (UniqueName: \"kubernetes.io/projected/082d4064-6b1c-4a39-9839-3466e7a1ce3a-kube-api-access-jkncq\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532139 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-config\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532179 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-serving-cert\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532206 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082d4064-6b1c-4a39-9839-3466e7a1ce3a-serving-cert\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532226 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-config\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc 
kubenswrapper[4705]: I0216 14:56:34.532242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg54f\" (UniqueName: \"kubernetes.io/projected/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-kube-api-access-gg54f\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532259 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-client-ca\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532294 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-client-ca\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.532331 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-proxy-ca-bundles\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.533964 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-config\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: 
\"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.534074 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-proxy-ca-bundles\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.535658 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-client-ca\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.535790 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-client-ca\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.535955 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-config\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.539880 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-serving-cert\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.542854 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082d4064-6b1c-4a39-9839-3466e7a1ce3a-serving-cert\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.551241 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkncq\" (UniqueName: \"kubernetes.io/projected/082d4064-6b1c-4a39-9839-3466e7a1ce3a-kube-api-access-jkncq\") pod \"controller-manager-7f948cbdb-xlnlb\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.552214 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.555254 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg54f\" (UniqueName: \"kubernetes.io/projected/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-kube-api-access-gg54f\") pod \"route-controller-manager-d57cf7986-vpzs2\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.638564 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:34 crc kubenswrapper[4705]: I0216 14:56:34.659326 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:35 crc kubenswrapper[4705]: I0216 14:56:35.542591 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:56:35 crc kubenswrapper[4705]: I0216 14:56:35.542666 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.224243 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.226794 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.228933 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.229742 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.236572 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.257803 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" 
Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.257865 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.359178 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.359268 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.359397 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.378661 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 
14:56:36.427306 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df1ce7e-6cc2-4619-8df9-bee8e9ae6650" path="/var/lib/kubelet/pods/3df1ce7e-6cc2-4619-8df9-bee8e9ae6650/volumes" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.428229 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f71fd9-bba2-481c-8b42-46894c93e49d" path="/var/lib/kubelet/pods/42f71fd9-bba2-481c-8b42-46894c93e49d/volumes" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.436639 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.522521 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mqkpd"] Feb 16 14:56:36 crc kubenswrapper[4705]: I0216 14:56:36.568735 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.047390 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"] Feb 16 14:56:37 crc kubenswrapper[4705]: W0216 14:56:37.056467 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cc70a9e_0338_4f1f_8c4b_1ef8d62b424a.slice/crio-c9f0bf0d686fb65c6bb4b6a7fd081881c8f7f5daa12afe94cab4eb77f10377b2 WatchSource:0}: Error finding container c9f0bf0d686fb65c6bb4b6a7fd081881c8f7f5daa12afe94cab4eb77f10377b2: Status 404 returned error can't find the container with id c9f0bf0d686fb65c6bb4b6a7fd081881c8f7f5daa12afe94cab4eb77f10377b2 Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.119655 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f948cbdb-xlnlb"] Feb 16 14:56:37 crc kubenswrapper[4705]: W0216 
14:56:37.141883 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod082d4064_6b1c_4a39_9839_3466e7a1ce3a.slice/crio-1d19aea73538acf633cedd140eca18425eeaced17742fab95f70baed7c7b2be4 WatchSource:0}: Error finding container 1d19aea73538acf633cedd140eca18425eeaced17742fab95f70baed7c7b2be4: Status 404 returned error can't find the container with id 1d19aea73538acf633cedd140eca18425eeaced17742fab95f70baed7c7b2be4 Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.202780 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.476450 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d10e6ed9-d49d-45c6-8cbd-536751ec37d4","Type":"ContainerStarted","Data":"7afe14e3111f637d23e68bc4226f8826241d6020b90b0d9c519f97d3c5c994b0"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.481770 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw88w" event={"ID":"37d84ef8-6e1f-4126-8356-189afb52b629","Type":"ContainerStarted","Data":"bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.483313 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" event={"ID":"082d4064-6b1c-4a39-9839-3466e7a1ce3a","Type":"ContainerStarted","Data":"6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.483518 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" event={"ID":"082d4064-6b1c-4a39-9839-3466e7a1ce3a","Type":"ContainerStarted","Data":"1d19aea73538acf633cedd140eca18425eeaced17742fab95f70baed7c7b2be4"} Feb 16 
14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.483616 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.485540 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerStarted","Data":"6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.488000 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlgwg" event={"ID":"c6d685f5-d57e-434b-93c8-727195de9479","Type":"ContainerStarted","Data":"9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.490840 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj9bt" event={"ID":"c8efc871-44f0-4bbd-b639-6adaee23319a","Type":"ContainerStarted","Data":"d21d87e204d7c7dd1f5e531f27be7d67418c7a9af9ade838a90a03b259c16e3c"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.493216 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" event={"ID":"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a","Type":"ContainerStarted","Data":"6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.493247 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" event={"ID":"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a","Type":"ContainerStarted","Data":"c9f0bf0d686fb65c6bb4b6a7fd081881c8f7f5daa12afe94cab4eb77f10377b2"} Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.493842 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.528617 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bw88w" podStartSLOduration=2.926467036 podStartE2EDuration="42.528596081s" podCreationTimestamp="2026-02-16 14:55:55 +0000 UTC" firstStartedPulling="2026-02-16 14:55:56.86342465 +0000 UTC m=+151.048401726" lastFinishedPulling="2026-02-16 14:56:36.465553695 +0000 UTC m=+190.650530771" observedRunningTime="2026-02-16 14:56:37.511466639 +0000 UTC m=+191.696443725" watchObservedRunningTime="2026-02-16 14:56:37.528596081 +0000 UTC m=+191.713573157" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.531462 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vb279" podStartSLOduration=3.028993978 podStartE2EDuration="40.531449729s" podCreationTimestamp="2026-02-16 14:55:57 +0000 UTC" firstStartedPulling="2026-02-16 14:55:58.92586426 +0000 UTC m=+153.110841336" lastFinishedPulling="2026-02-16 14:56:36.428320011 +0000 UTC m=+190.613297087" observedRunningTime="2026-02-16 14:56:37.52785928 +0000 UTC m=+191.712836356" watchObservedRunningTime="2026-02-16 14:56:37.531449729 +0000 UTC m=+191.716426815" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.571125 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jlgwg" podStartSLOduration=3.048399132 podStartE2EDuration="39.57109014s" podCreationTimestamp="2026-02-16 14:55:58 +0000 UTC" firstStartedPulling="2026-02-16 14:55:59.984955215 +0000 UTC m=+154.169932291" lastFinishedPulling="2026-02-16 14:56:36.507646223 +0000 UTC m=+190.692623299" observedRunningTime="2026-02-16 14:56:37.566055541 +0000 UTC m=+191.751032617" watchObservedRunningTime="2026-02-16 14:56:37.57109014 +0000 UTC m=+191.756067216" Feb 16 14:56:37 crc 
kubenswrapper[4705]: I0216 14:56:37.634925 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sj9bt" podStartSLOduration=4.450408079 podStartE2EDuration="42.634903975s" podCreationTimestamp="2026-02-16 14:55:55 +0000 UTC" firstStartedPulling="2026-02-16 14:55:56.860535181 +0000 UTC m=+151.045512257" lastFinishedPulling="2026-02-16 14:56:35.045031087 +0000 UTC m=+189.230008153" observedRunningTime="2026-02-16 14:56:37.631615325 +0000 UTC m=+191.816592411" watchObservedRunningTime="2026-02-16 14:56:37.634903975 +0000 UTC m=+191.819881041" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.636426 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" podStartSLOduration=5.636419347 podStartE2EDuration="5.636419347s" podCreationTimestamp="2026-02-16 14:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:37.593952409 +0000 UTC m=+191.778929515" watchObservedRunningTime="2026-02-16 14:56:37.636419347 +0000 UTC m=+191.821396423" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.659502 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.659566 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.735544 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:56:37 crc kubenswrapper[4705]: I0216 14:56:37.774680 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" 
podStartSLOduration=5.7746517 podStartE2EDuration="5.7746517s" podCreationTimestamp="2026-02-16 14:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:37.673879667 +0000 UTC m=+191.858856753" watchObservedRunningTime="2026-02-16 14:56:37.7746517 +0000 UTC m=+191.959628776" Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.057780 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.057840 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vb279" Feb 16 14:56:38 crc kubenswrapper[4705]: E0216 14:56:38.308811 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd10e6ed9_d49d_45c6_8cbd_536751ec37d4.slice/crio-conmon-c5a2e101d0b2cb0b252dbd909f60f1ae14bedee66e9cdaa812c669200d50d06b.scope\": RecentStats: unable to find data in memory cache]" Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.500472 4705 generic.go:334] "Generic (PLEG): container finished" podID="d10e6ed9-d49d-45c6-8cbd-536751ec37d4" containerID="c5a2e101d0b2cb0b252dbd909f60f1ae14bedee66e9cdaa812c669200d50d06b" exitCode=0 Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.500697 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d10e6ed9-d49d-45c6-8cbd-536751ec37d4","Type":"ContainerDied","Data":"c5a2e101d0b2cb0b252dbd909f60f1ae14bedee66e9cdaa812c669200d50d06b"} Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.501923 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.508550 
4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"
Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.573129 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.700226 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qkkgp"
Feb 16 14:56:38 crc kubenswrapper[4705]: I0216 14:56:38.700275 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qkkgp"
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.083706 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jlgwg"
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.083765 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jlgwg"
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.117705 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-vb279" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="registry-server" probeResult="failure" output=<
Feb 16 14:56:39 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s
Feb 16 14:56:39 crc kubenswrapper[4705]: >
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.746347 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qkkgp" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="registry-server" probeResult="failure" output=<
Feb 16 14:56:39 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s
Feb 16 14:56:39 crc kubenswrapper[4705]: >
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.885965 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.917389 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kubelet-dir\") pod \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") "
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.917470 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kube-api-access\") pod \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\" (UID: \"d10e6ed9-d49d-45c6-8cbd-536751ec37d4\") "
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.917553 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d10e6ed9-d49d-45c6-8cbd-536751ec37d4" (UID: "d10e6ed9-d49d-45c6-8cbd-536751ec37d4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.918867 4705 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:39 crc kubenswrapper[4705]: I0216 14:56:39.943750 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d10e6ed9-d49d-45c6-8cbd-536751ec37d4" (UID: "d10e6ed9-d49d-45c6-8cbd-536751ec37d4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:56:40 crc kubenswrapper[4705]: I0216 14:56:40.020169 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d10e6ed9-d49d-45c6-8cbd-536751ec37d4-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:40 crc kubenswrapper[4705]: I0216 14:56:40.130230 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jlgwg" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="registry-server" probeResult="failure" output=<
Feb 16 14:56:40 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s
Feb 16 14:56:40 crc kubenswrapper[4705]: >
Feb 16 14:56:40 crc kubenswrapper[4705]: I0216 14:56:40.513296 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 14:56:40 crc kubenswrapper[4705]: I0216 14:56:40.514147 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d10e6ed9-d49d-45c6-8cbd-536751ec37d4","Type":"ContainerDied","Data":"7afe14e3111f637d23e68bc4226f8826241d6020b90b0d9c519f97d3c5c994b0"}
Feb 16 14:56:40 crc kubenswrapper[4705]: I0216 14:56:40.514258 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7afe14e3111f637d23e68bc4226f8826241d6020b90b0d9c519f97d3c5c994b0"
Feb 16 14:56:41 crc kubenswrapper[4705]: I0216 14:56:41.520335 4705 generic.go:334] "Generic (PLEG): container finished" podID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerID="e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5" exitCode=0
Feb 16 14:56:41 crc kubenswrapper[4705]: I0216 14:56:41.520405 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngfnt" event={"ID":"1f1a76ff-82ae-4dac-88d2-20e6858835e3","Type":"ContainerDied","Data":"e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5"}
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.222353 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 16 14:56:43 crc kubenswrapper[4705]: E0216 14:56:43.225218 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d10e6ed9-d49d-45c6-8cbd-536751ec37d4" containerName="pruner"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.225236 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d10e6ed9-d49d-45c6-8cbd-536751ec37d4" containerName="pruner"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.225343 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d10e6ed9-d49d-45c6-8cbd-536751ec37d4" containerName="pruner"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.225852 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.230184 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.231214 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.231297 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.277681 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b45f345-45b8-4e21-a4da-46e4d43e429e-kube-api-access\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.277737 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.277803 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-var-lock\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.379832 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-var-lock\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.380008 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b45f345-45b8-4e21-a4da-46e4d43e429e-kube-api-access\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.380011 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-var-lock\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.380050 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.380136 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.401626 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b45f345-45b8-4e21-a4da-46e4d43e429e-kube-api-access\") pod \"installer-9-crc\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.538962 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngfnt" event={"ID":"1f1a76ff-82ae-4dac-88d2-20e6858835e3","Type":"ContainerStarted","Data":"d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8"}
Feb 16 14:56:43 crc kubenswrapper[4705]: I0216 14:56:43.544354 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 14:56:44 crc kubenswrapper[4705]: I0216 14:56:44.029636 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 16 14:56:44 crc kubenswrapper[4705]: W0216 14:56:44.050662 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6b45f345_45b8_4e21_a4da_46e4d43e429e.slice/crio-87f69726e97d736145a6290998a64cafeb28b2d4eb89eea3b13c3f0bb2801014 WatchSource:0}: Error finding container 87f69726e97d736145a6290998a64cafeb28b2d4eb89eea3b13c3f0bb2801014: Status 404 returned error can't find the container with id 87f69726e97d736145a6290998a64cafeb28b2d4eb89eea3b13c3f0bb2801014
Feb 16 14:56:44 crc kubenswrapper[4705]: I0216 14:56:44.544675 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6b45f345-45b8-4e21-a4da-46e4d43e429e","Type":"ContainerStarted","Data":"8a30aa7cf7e0f680c219c737827d7511374124ec3b0f2c971c1e7c9989007cdc"}
Feb 16 14:56:44 crc kubenswrapper[4705]: I0216 14:56:44.545077 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6b45f345-45b8-4e21-a4da-46e4d43e429e","Type":"ContainerStarted","Data":"87f69726e97d736145a6290998a64cafeb28b2d4eb89eea3b13c3f0bb2801014"}
Feb 16 14:56:44 crc kubenswrapper[4705]: I0216 14:56:44.567493 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.567468103 podStartE2EDuration="1.567468103s" podCreationTimestamp="2026-02-16 14:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:44.562478641 +0000 UTC m=+198.747455727" watchObservedRunningTime="2026-02-16 14:56:44.567468103 +0000 UTC m=+198.752445199"
Feb 16 14:56:44 crc kubenswrapper[4705]: I0216 14:56:44.588381 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ngfnt" podStartSLOduration=3.489818689 podStartE2EDuration="49.588350073s" podCreationTimestamp="2026-02-16 14:55:55 +0000 UTC" firstStartedPulling="2026-02-16 14:55:56.86667902 +0000 UTC m=+151.051656106" lastFinishedPulling="2026-02-16 14:56:42.965210404 +0000 UTC m=+197.150187490" observedRunningTime="2026-02-16 14:56:44.587464698 +0000 UTC m=+198.772441794" watchObservedRunningTime="2026-02-16 14:56:44.588350073 +0000 UTC m=+198.773327149"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.614887 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wvxpr"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.660873 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sj9bt"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.660958 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sj9bt"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.718005 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sj9bt"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.860721 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.860792 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:56:45 crc kubenswrapper[4705]: I0216 14:56:45.903045 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ngfnt"
Feb 16 14:56:46 crc kubenswrapper[4705]: I0216 14:56:46.065714 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:56:46 crc kubenswrapper[4705]: I0216 14:56:46.065791 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:56:46 crc kubenswrapper[4705]: I0216 14:56:46.114802 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:56:46 crc kubenswrapper[4705]: I0216 14:56:46.629026 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:56:46 crc kubenswrapper[4705]: I0216 14:56:46.632293 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sj9bt"
Feb 16 14:56:48 crc kubenswrapper[4705]: I0216 14:56:48.156026 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bw88w"]
Feb 16 14:56:48 crc kubenswrapper[4705]: I0216 14:56:48.175044 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:56:48 crc kubenswrapper[4705]: I0216 14:56:48.229994 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:56:48 crc kubenswrapper[4705]: I0216 14:56:48.570855 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bw88w" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="registry-server" containerID="cri-o://bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d" gracePeriod=2
Feb 16 14:56:48 crc kubenswrapper[4705]: I0216 14:56:48.763819 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qkkgp"
Feb 16 14:56:48 crc kubenswrapper[4705]: I0216 14:56:48.826963 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qkkgp"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.063182 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.136531 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jlgwg"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.176025 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-utilities\") pod \"37d84ef8-6e1f-4126-8356-189afb52b629\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") "
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.176145 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntm65\" (UniqueName: \"kubernetes.io/projected/37d84ef8-6e1f-4126-8356-189afb52b629-kube-api-access-ntm65\") pod \"37d84ef8-6e1f-4126-8356-189afb52b629\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") "
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.176311 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-catalog-content\") pod \"37d84ef8-6e1f-4126-8356-189afb52b629\" (UID: \"37d84ef8-6e1f-4126-8356-189afb52b629\") "
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.179848 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-utilities" (OuterVolumeSpecName: "utilities") pod "37d84ef8-6e1f-4126-8356-189afb52b629" (UID: "37d84ef8-6e1f-4126-8356-189afb52b629"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.185579 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d84ef8-6e1f-4126-8356-189afb52b629-kube-api-access-ntm65" (OuterVolumeSpecName: "kube-api-access-ntm65") pod "37d84ef8-6e1f-4126-8356-189afb52b629" (UID: "37d84ef8-6e1f-4126-8356-189afb52b629"). InnerVolumeSpecName "kube-api-access-ntm65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.185815 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jlgwg"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.265861 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37d84ef8-6e1f-4126-8356-189afb52b629" (UID: "37d84ef8-6e1f-4126-8356-189afb52b629"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.279266 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.279296 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d84ef8-6e1f-4126-8356-189afb52b629-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.279317 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntm65\" (UniqueName: \"kubernetes.io/projected/37d84ef8-6e1f-4126-8356-189afb52b629-kube-api-access-ntm65\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.590082 4705 generic.go:334] "Generic (PLEG): container finished" podID="37d84ef8-6e1f-4126-8356-189afb52b629" containerID="bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d" exitCode=0
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.590189 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw88w" event={"ID":"37d84ef8-6e1f-4126-8356-189afb52b629","Type":"ContainerDied","Data":"bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d"}
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.590296 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bw88w" event={"ID":"37d84ef8-6e1f-4126-8356-189afb52b629","Type":"ContainerDied","Data":"84b0c4e14a3064d4d96f1f68cbab03b366c6b38944839fb2b7297a8f31d08a3b"}
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.590209 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bw88w"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.590358 4705 scope.go:117] "RemoveContainer" containerID="bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.623926 4705 scope.go:117] "RemoveContainer" containerID="fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.641877 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bw88w"]
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.647533 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bw88w"]
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.668749 4705 scope.go:117] "RemoveContainer" containerID="2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.684200 4705 scope.go:117] "RemoveContainer" containerID="bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d"
Feb 16 14:56:49 crc kubenswrapper[4705]: E0216 14:56:49.684714 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d\": container with ID starting with bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d not found: ID does not exist" containerID="bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.684869 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d"} err="failed to get container status \"bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d\": rpc error: code = NotFound desc = could not find container \"bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d\": container with ID starting with bc0b9ce030b5c9c0eec0f055ef42b2ba473764fdf7e8b6f60bd40db06226ac6d not found: ID does not exist"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.685030 4705 scope.go:117] "RemoveContainer" containerID="fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46"
Feb 16 14:56:49 crc kubenswrapper[4705]: E0216 14:56:49.685535 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46\": container with ID starting with fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46 not found: ID does not exist" containerID="fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.685579 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46"} err="failed to get container status \"fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46\": rpc error: code = NotFound desc = could not find container \"fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46\": container with ID starting with fcd6fa9c54448db3773459b702e23ad3da60475d827c659b5619a0523d327c46 not found: ID does not exist"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.685625 4705 scope.go:117] "RemoveContainer" containerID="2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176"
Feb 16 14:56:49 crc kubenswrapper[4705]: E0216 14:56:49.691134 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176\": container with ID starting with 2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176 not found: ID does not exist" containerID="2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176"
Feb 16 14:56:49 crc kubenswrapper[4705]: I0216 14:56:49.691205 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176"} err="failed to get container status \"2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176\": rpc error: code = NotFound desc = could not find container \"2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176\": container with ID starting with 2fd23d4ad56812e5fc16650b24cb4be89db6f43ca85cc9225b1e241859ca5176 not found: ID does not exist"
Feb 16 14:56:50 crc kubenswrapper[4705]: I0216 14:56:50.434776 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" path="/var/lib/kubelet/pods/37d84ef8-6e1f-4126-8356-189afb52b629/volumes"
Feb 16 14:56:51 crc kubenswrapper[4705]: I0216 14:56:51.541968 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb279"]
Feb 16 14:56:51 crc kubenswrapper[4705]: I0216 14:56:51.542420 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vb279" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="registry-server" containerID="cri-o://6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d" gracePeriod=2
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.118540 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.224576 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-utilities\") pod \"0ee875e7-6eab-4220-a29d-316c22f70703\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") "
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.225112 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kmkv\" (UniqueName: \"kubernetes.io/projected/0ee875e7-6eab-4220-a29d-316c22f70703-kube-api-access-8kmkv\") pod \"0ee875e7-6eab-4220-a29d-316c22f70703\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") "
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.225302 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-catalog-content\") pod \"0ee875e7-6eab-4220-a29d-316c22f70703\" (UID: \"0ee875e7-6eab-4220-a29d-316c22f70703\") "
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.227778 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-utilities" (OuterVolumeSpecName: "utilities") pod "0ee875e7-6eab-4220-a29d-316c22f70703" (UID: "0ee875e7-6eab-4220-a29d-316c22f70703"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.237829 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee875e7-6eab-4220-a29d-316c22f70703-kube-api-access-8kmkv" (OuterVolumeSpecName: "kube-api-access-8kmkv") pod "0ee875e7-6eab-4220-a29d-316c22f70703" (UID: "0ee875e7-6eab-4220-a29d-316c22f70703"). InnerVolumeSpecName "kube-api-access-8kmkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.286151 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ee875e7-6eab-4220-a29d-316c22f70703" (UID: "0ee875e7-6eab-4220-a29d-316c22f70703"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.327675 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.327736 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kmkv\" (UniqueName: \"kubernetes.io/projected/0ee875e7-6eab-4220-a29d-316c22f70703-kube-api-access-8kmkv\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.327753 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee875e7-6eab-4220-a29d-316c22f70703-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.543008 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jlgwg"]
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.543346 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jlgwg" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="registry-server" containerID="cri-o://9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234" gracePeriod=2
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.615857 4705 generic.go:334] "Generic (PLEG): container finished" podID="0ee875e7-6eab-4220-a29d-316c22f70703" containerID="6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d" exitCode=0
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.616351 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerDied","Data":"6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d"}
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.616397 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb279" event={"ID":"0ee875e7-6eab-4220-a29d-316c22f70703","Type":"ContainerDied","Data":"8eacf80745eba9b4023ca71499503eec2319ce40818e105b2747f4b39c4b0413"}
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.616421 4705 scope.go:117] "RemoveContainer" containerID="6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.616516 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb279"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.647765 4705 scope.go:117] "RemoveContainer" containerID="a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.649999 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb279"]
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.667534 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb279"]
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.671786 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f948cbdb-xlnlb"]
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.672032 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" podUID="082d4064-6b1c-4a39-9839-3466e7a1ce3a" containerName="controller-manager" containerID="cri-o://6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c" gracePeriod=30
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.710525 4705 scope.go:117] "RemoveContainer" containerID="d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.710659 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"]
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.711151 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" podUID="6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" containerName="route-controller-manager" containerID="cri-o://6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48" gracePeriod=30
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.855688 4705 scope.go:117] "RemoveContainer" containerID="6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d"
Feb 16 14:56:52 crc kubenswrapper[4705]: E0216 14:56:52.856171 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d\": container with ID starting with 6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d not found: ID does not exist" containerID="6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.856214 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d"} err="failed to get container status \"6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d\": rpc error: code = NotFound desc = could not find container \"6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d\": container with ID starting with 6506c1b3c95f64bc4dc9f19bc688787af07df893e443540603efd868d9fbdc5d not found: ID does not exist"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.856237 4705 scope.go:117] "RemoveContainer" containerID="a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507"
Feb 16 14:56:52 crc kubenswrapper[4705]: E0216 14:56:52.856927 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507\": container with ID starting with a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507 not found: ID does not exist" containerID="a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.856966 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507"} err="failed to get container status \"a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507\": rpc error: code = NotFound desc = could not find container \"a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507\": container with ID starting with a79cb7b9efe77ffd0a9af097ca390267a6c48ca7b6ee79cb3e07f02638a7a507 not found: ID does not exist"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.856995 4705 scope.go:117] "RemoveContainer" containerID="d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826"
Feb 16 14:56:52 crc kubenswrapper[4705]: E0216 14:56:52.857555 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826\": container with ID starting with d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826 not found: ID does not exist" containerID="d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826"
Feb 16 14:56:52 crc kubenswrapper[4705]: I0216 14:56:52.857575 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826"} err="failed to get container status \"d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826\": rpc error: code = NotFound desc = could not find container \"d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826\": container with ID starting with d65ce7773e2e55871f105bd366e8f72671da134394df669fdd38f9ccb905a826 not found: ID does not exist"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.053536 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlgwg"
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.143520 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-utilities\") pod \"c6d685f5-d57e-434b-93c8-727195de9479\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") "
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.143663 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjsqr\" (UniqueName: \"kubernetes.io/projected/c6d685f5-d57e-434b-93c8-727195de9479-kube-api-access-hjsqr\") pod \"c6d685f5-d57e-434b-93c8-727195de9479\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") "
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.143685 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-catalog-content\") pod \"c6d685f5-d57e-434b-93c8-727195de9479\" (UID: \"c6d685f5-d57e-434b-93c8-727195de9479\") "
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.145193 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-utilities" (OuterVolumeSpecName: "utilities") pod "c6d685f5-d57e-434b-93c8-727195de9479" (UID: "c6d685f5-d57e-434b-93c8-727195de9479"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.150314 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6d685f5-d57e-434b-93c8-727195de9479-kube-api-access-hjsqr" (OuterVolumeSpecName: "kube-api-access-hjsqr") pod "c6d685f5-d57e-434b-93c8-727195de9479" (UID: "c6d685f5-d57e-434b-93c8-727195de9479"). InnerVolumeSpecName "kube-api-access-hjsqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.245284 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.245331 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjsqr\" (UniqueName: \"kubernetes.io/projected/c6d685f5-d57e-434b-93c8-727195de9479-kube-api-access-hjsqr\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.252907 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.295616 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6d685f5-d57e-434b-93c8-727195de9479" (UID: "c6d685f5-d57e-434b-93c8-727195de9479"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.302443 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346476 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkncq\" (UniqueName: \"kubernetes.io/projected/082d4064-6b1c-4a39-9839-3466e7a1ce3a-kube-api-access-jkncq\") pod \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346524 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082d4064-6b1c-4a39-9839-3466e7a1ce3a-serving-cert\") pod \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346675 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-serving-cert\") pod \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346698 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-client-ca\") pod \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346747 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-client-ca\") pod \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346769 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-proxy-ca-bundles\") pod \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.346794 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-config\") pod \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\" (UID: \"082d4064-6b1c-4a39-9839-3466e7a1ce3a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.347809 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-client-ca" (OuterVolumeSpecName: "client-ca") pod "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" (UID: "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.347869 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg54f\" (UniqueName: \"kubernetes.io/projected/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-kube-api-access-gg54f\") pod \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.347907 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-config\") pod \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\" (UID: \"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a\") " Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.348253 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 
14:56:53.348271 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d685f5-d57e-434b-93c8-727195de9479-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.348461 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-config" (OuterVolumeSpecName: "config") pod "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" (UID: "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.349268 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-client-ca" (OuterVolumeSpecName: "client-ca") pod "082d4064-6b1c-4a39-9839-3466e7a1ce3a" (UID: "082d4064-6b1c-4a39-9839-3466e7a1ce3a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.349335 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-config" (OuterVolumeSpecName: "config") pod "082d4064-6b1c-4a39-9839-3466e7a1ce3a" (UID: "082d4064-6b1c-4a39-9839-3466e7a1ce3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.349403 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "082d4064-6b1c-4a39-9839-3466e7a1ce3a" (UID: "082d4064-6b1c-4a39-9839-3466e7a1ce3a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.349420 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082d4064-6b1c-4a39-9839-3466e7a1ce3a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "082d4064-6b1c-4a39-9839-3466e7a1ce3a" (UID: "082d4064-6b1c-4a39-9839-3466e7a1ce3a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.349485 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/082d4064-6b1c-4a39-9839-3466e7a1ce3a-kube-api-access-jkncq" (OuterVolumeSpecName: "kube-api-access-jkncq") pod "082d4064-6b1c-4a39-9839-3466e7a1ce3a" (UID: "082d4064-6b1c-4a39-9839-3466e7a1ce3a"). InnerVolumeSpecName "kube-api-access-jkncq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.350049 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" (UID: "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.350830 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-kube-api-access-gg54f" (OuterVolumeSpecName: "kube-api-access-gg54f") pod "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" (UID: "6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a"). InnerVolumeSpecName "kube-api-access-gg54f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450145 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450210 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg54f\" (UniqueName: \"kubernetes.io/projected/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-kube-api-access-gg54f\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450233 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450253 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkncq\" (UniqueName: \"kubernetes.io/projected/082d4064-6b1c-4a39-9839-3466e7a1ce3a-kube-api-access-jkncq\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450272 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082d4064-6b1c-4a39-9839-3466e7a1ce3a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450290 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450306 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.450322 4705 reconciler_common.go:293] 
"Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/082d4064-6b1c-4a39-9839-3466e7a1ce3a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.629417 4705 generic.go:334] "Generic (PLEG): container finished" podID="c6d685f5-d57e-434b-93c8-727195de9479" containerID="9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234" exitCode=0 Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.629459 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlgwg" event={"ID":"c6d685f5-d57e-434b-93c8-727195de9479","Type":"ContainerDied","Data":"9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234"} Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.630657 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlgwg" event={"ID":"c6d685f5-d57e-434b-93c8-727195de9479","Type":"ContainerDied","Data":"73444c3bc58c0f167a866ff98a950aa8d535f52acd246e74ec5adc8c7a296701"} Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.629522 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jlgwg" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.630719 4705 scope.go:117] "RemoveContainer" containerID="9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.633218 4705 generic.go:334] "Generic (PLEG): container finished" podID="6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" containerID="6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48" exitCode=0 Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.633279 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" event={"ID":"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a","Type":"ContainerDied","Data":"6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48"} Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.633289 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.633303 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2" event={"ID":"6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a","Type":"ContainerDied","Data":"c9f0bf0d686fb65c6bb4b6a7fd081881c8f7f5daa12afe94cab4eb77f10377b2"} Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.636943 4705 generic.go:334] "Generic (PLEG): container finished" podID="082d4064-6b1c-4a39-9839-3466e7a1ce3a" containerID="6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c" exitCode=0 Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.637043 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" 
event={"ID":"082d4064-6b1c-4a39-9839-3466e7a1ce3a","Type":"ContainerDied","Data":"6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c"} Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.637126 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" event={"ID":"082d4064-6b1c-4a39-9839-3466e7a1ce3a","Type":"ContainerDied","Data":"1d19aea73538acf633cedd140eca18425eeaced17742fab95f70baed7c7b2be4"} Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.637043 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f948cbdb-xlnlb" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.668504 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f948cbdb-xlnlb"] Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.686256 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f948cbdb-xlnlb"] Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.689581 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jlgwg"] Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.690475 4705 scope.go:117] "RemoveContainer" containerID="22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.691557 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jlgwg"] Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.708833 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"] Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.712242 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57cf7986-vpzs2"] 
Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.724243 4705 scope.go:117] "RemoveContainer" containerID="fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.751983 4705 scope.go:117] "RemoveContainer" containerID="9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234" Feb 16 14:56:53 crc kubenswrapper[4705]: E0216 14:56:53.752751 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234\": container with ID starting with 9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234 not found: ID does not exist" containerID="9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.752825 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234"} err="failed to get container status \"9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234\": rpc error: code = NotFound desc = could not find container \"9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234\": container with ID starting with 9aa9acb381319a500ae19e5bf6c51ff8ae7b30c87966c93c2d318f8fdea59234 not found: ID does not exist" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.752882 4705 scope.go:117] "RemoveContainer" containerID="22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130" Feb 16 14:56:53 crc kubenswrapper[4705]: E0216 14:56:53.753504 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130\": container with ID starting with 22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130 not found: ID does not exist" 
containerID="22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.753582 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130"} err="failed to get container status \"22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130\": rpc error: code = NotFound desc = could not find container \"22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130\": container with ID starting with 22de517402522635bdbd00777c5cb3b9d74d5a0f06ad6428946443687a3fd130 not found: ID does not exist" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.753631 4705 scope.go:117] "RemoveContainer" containerID="fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4" Feb 16 14:56:53 crc kubenswrapper[4705]: E0216 14:56:53.754218 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4\": container with ID starting with fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4 not found: ID does not exist" containerID="fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.754259 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4"} err="failed to get container status \"fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4\": rpc error: code = NotFound desc = could not find container \"fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4\": container with ID starting with fd1901af5b6c33421d827d4752faaeaac83efaae65835ae3f7ac90854e5e8fc4 not found: ID does not exist" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.754288 4705 scope.go:117] 
"RemoveContainer" containerID="6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.774178 4705 scope.go:117] "RemoveContainer" containerID="6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48" Feb 16 14:56:53 crc kubenswrapper[4705]: E0216 14:56:53.774949 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48\": container with ID starting with 6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48 not found: ID does not exist" containerID="6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.775021 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48"} err="failed to get container status \"6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48\": rpc error: code = NotFound desc = could not find container \"6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48\": container with ID starting with 6f451a1f88db336e52ad907e9c9930dbc0372b366c135e85f4c2a7249a610e48 not found: ID does not exist" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.775070 4705 scope.go:117] "RemoveContainer" containerID="6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.791012 4705 scope.go:117] "RemoveContainer" containerID="6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c" Feb 16 14:56:53 crc kubenswrapper[4705]: E0216 14:56:53.791632 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c\": container with ID starting with 
6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c not found: ID does not exist" containerID="6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c" Feb 16 14:56:53 crc kubenswrapper[4705]: I0216 14:56:53.791682 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c"} err="failed to get container status \"6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c\": rpc error: code = NotFound desc = could not find container \"6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c\": container with ID starting with 6905c3ab855e43420401ee2347f8aca171f12b85a7939e67d5d2b455631aa79c not found: ID does not exist" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.302482 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"] Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.302889 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" containerName="route-controller-manager" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.302922 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" containerName="route-controller-manager" Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.302950 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="registry-server" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.302964 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="registry-server" Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.302980 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="registry-server" Feb 16 14:56:54 
crc kubenswrapper[4705]: I0216 14:56:54.302992 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="registry-server" Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303018 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="extract-utilities" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303036 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="extract-utilities" Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303054 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="extract-content" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303067 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="extract-content" Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303081 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="082d4064-6b1c-4a39-9839-3466e7a1ce3a" containerName="controller-manager" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303094 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="082d4064-6b1c-4a39-9839-3466e7a1ce3a" containerName="controller-manager" Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303117 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="extract-content" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303130 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="extract-content" Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303146 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="extract-utilities" Feb 16 14:56:54 
crc kubenswrapper[4705]: I0216 14:56:54.303160 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="extract-utilities" Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303185 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="extract-utilities" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303199 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="extract-utilities" Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303215 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="extract-content" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303228 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="extract-content" Feb 16 14:56:54 crc kubenswrapper[4705]: E0216 14:56:54.303244 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="registry-server" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303257 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="registry-server" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303471 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" containerName="registry-server" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303504 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d685f5-d57e-434b-93c8-727195de9479" containerName="registry-server" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303525 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" containerName="route-controller-manager" Feb 16 
14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303547 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d84ef8-6e1f-4126-8356-189afb52b629" containerName="registry-server" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.303564 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="082d4064-6b1c-4a39-9839-3466e7a1ce3a" containerName="controller-manager" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.304253 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.308579 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b958878b7-5qdcz"] Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.309065 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.310161 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.310230 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.310712 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.311006 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.311835 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.311982 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.317944 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.318481 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.318518 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.320872 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.322568 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.323179 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 14:56:54 crc 
kubenswrapper[4705]: I0216 14:56:54.327781 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"] Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.329044 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.350547 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b958878b7-5qdcz"] Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.367712 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-proxy-ca-bundles\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.367799 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64c87\" (UniqueName: \"kubernetes.io/projected/d37fdcf5-d38d-4ee6-a395-67c634cc101d-kube-api-access-64c87\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-config\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368336 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-client-ca\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368404 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz7vt\" (UniqueName: \"kubernetes.io/projected/47c8a460-a52e-4669-bce1-28110d7d1d84-kube-api-access-fz7vt\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368510 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-config\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368609 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d37fdcf5-d38d-4ee6-a395-67c634cc101d-serving-cert\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368701 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-client-ca\") pod 
\"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.368767 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47c8a460-a52e-4669-bce1-28110d7d1d84-serving-cert\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.429328 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="082d4064-6b1c-4a39-9839-3466e7a1ce3a" path="/var/lib/kubelet/pods/082d4064-6b1c-4a39-9839-3466e7a1ce3a/volumes" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.430587 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee875e7-6eab-4220-a29d-316c22f70703" path="/var/lib/kubelet/pods/0ee875e7-6eab-4220-a29d-316c22f70703/volumes" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.431575 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a" path="/var/lib/kubelet/pods/6cc70a9e-0338-4f1f-8c4b-1ef8d62b424a/volumes" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.433210 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6d685f5-d57e-434b-93c8-727195de9479" path="/var/lib/kubelet/pods/c6d685f5-d57e-434b-93c8-727195de9479/volumes" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.470644 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-config\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " 
pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.470720 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d37fdcf5-d38d-4ee6-a395-67c634cc101d-serving-cert\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.470777 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-client-ca\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.470856 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47c8a460-a52e-4669-bce1-28110d7d1d84-serving-cert\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.472805 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-proxy-ca-bundles\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.472681 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-config\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.472913 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64c87\" (UniqueName: \"kubernetes.io/projected/d37fdcf5-d38d-4ee6-a395-67c634cc101d-kube-api-access-64c87\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.472975 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-config\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.473604 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-client-ca\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.473677 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz7vt\" (UniqueName: \"kubernetes.io/projected/47c8a460-a52e-4669-bce1-28110d7d1d84-kube-api-access-fz7vt\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 
crc kubenswrapper[4705]: I0216 14:56:54.473892 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-client-ca\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.474241 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-proxy-ca-bundles\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.475276 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-client-ca\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.475277 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-config\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.479272 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47c8a460-a52e-4669-bce1-28110d7d1d84-serving-cert\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " 
pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.479421 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d37fdcf5-d38d-4ee6-a395-67c634cc101d-serving-cert\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.496632 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64c87\" (UniqueName: \"kubernetes.io/projected/d37fdcf5-d38d-4ee6-a395-67c634cc101d-kube-api-access-64c87\") pod \"controller-manager-7b958878b7-5qdcz\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.501851 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz7vt\" (UniqueName: \"kubernetes.io/projected/47c8a460-a52e-4669-bce1-28110d7d1d84-kube-api-access-fz7vt\") pod \"route-controller-manager-58d84ddb98-m6r9v\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.649918 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:54 crc kubenswrapper[4705]: I0216 14:56:54.665383 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.141315 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"] Feb 16 14:56:55 crc kubenswrapper[4705]: W0216 14:56:55.151594 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47c8a460_a52e_4669_bce1_28110d7d1d84.slice/crio-2795e4c2d0c5923a3f989bfa35226c0cc3e322f0d070cbb2c8ce6d68541e7796 WatchSource:0}: Error finding container 2795e4c2d0c5923a3f989bfa35226c0cc3e322f0d070cbb2c8ce6d68541e7796: Status 404 returned error can't find the container with id 2795e4c2d0c5923a3f989bfa35226c0cc3e322f0d070cbb2c8ce6d68541e7796 Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.165131 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b958878b7-5qdcz"] Feb 16 14:56:55 crc kubenswrapper[4705]: W0216 14:56:55.183594 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd37fdcf5_d38d_4ee6_a395_67c634cc101d.slice/crio-11b61afb5226e06f41df7b72351de58846518f769a70308755703e88f42cb5ed WatchSource:0}: Error finding container 11b61afb5226e06f41df7b72351de58846518f769a70308755703e88f42cb5ed: Status 404 returned error can't find the container with id 11b61afb5226e06f41df7b72351de58846518f769a70308755703e88f42cb5ed Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.692561 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" event={"ID":"d37fdcf5-d38d-4ee6-a395-67c634cc101d","Type":"ContainerStarted","Data":"d9ac65e45a94174f0bd15cdfaf08869840b4af6705995ff73dc2befc62108cb1"} Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.693098 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" event={"ID":"d37fdcf5-d38d-4ee6-a395-67c634cc101d","Type":"ContainerStarted","Data":"11b61afb5226e06f41df7b72351de58846518f769a70308755703e88f42cb5ed"} Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.693124 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.701730 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" event={"ID":"47c8a460-a52e-4669-bce1-28110d7d1d84","Type":"ContainerStarted","Data":"4f5bc87283404718ae7ce1ae59ea6deaba46f838300a564d98b5062b5e5e814b"} Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.701797 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" event={"ID":"47c8a460-a52e-4669-bce1-28110d7d1d84","Type":"ContainerStarted","Data":"2795e4c2d0c5923a3f989bfa35226c0cc3e322f0d070cbb2c8ce6d68541e7796"} Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.704820 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.714232 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" podStartSLOduration=3.714199205 podStartE2EDuration="3.714199205s" podCreationTimestamp="2026-02-16 14:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:55.712199618 +0000 UTC m=+209.897176694" watchObservedRunningTime="2026-02-16 14:56:55.714199205 +0000 UTC m=+209.899176291" Feb 16 14:56:55 crc 
kubenswrapper[4705]: I0216 14:56:55.902237 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:56:55 crc kubenswrapper[4705]: I0216 14:56:55.925853 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" podStartSLOduration=3.925821789 podStartE2EDuration="3.925821789s" podCreationTimestamp="2026-02-16 14:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:55.761722649 +0000 UTC m=+209.946699725" watchObservedRunningTime="2026-02-16 14:56:55.925821789 +0000 UTC m=+210.110798875" Feb 16 14:56:56 crc kubenswrapper[4705]: I0216 14:56:56.721982 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:56 crc kubenswrapper[4705]: I0216 14:56:56.731670 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:56:58 crc kubenswrapper[4705]: I0216 14:56:58.951478 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngfnt"] Feb 16 14:56:58 crc kubenswrapper[4705]: I0216 14:56:58.954559 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ngfnt" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="registry-server" containerID="cri-o://d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8" gracePeriod=2 Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.528179 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.662071 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr5j9\" (UniqueName: \"kubernetes.io/projected/1f1a76ff-82ae-4dac-88d2-20e6858835e3-kube-api-access-hr5j9\") pod \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.662142 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-utilities\") pod \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.662229 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-catalog-content\") pod \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\" (UID: \"1f1a76ff-82ae-4dac-88d2-20e6858835e3\") " Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.663089 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-utilities" (OuterVolumeSpecName: "utilities") pod "1f1a76ff-82ae-4dac-88d2-20e6858835e3" (UID: "1f1a76ff-82ae-4dac-88d2-20e6858835e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.668490 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f1a76ff-82ae-4dac-88d2-20e6858835e3-kube-api-access-hr5j9" (OuterVolumeSpecName: "kube-api-access-hr5j9") pod "1f1a76ff-82ae-4dac-88d2-20e6858835e3" (UID: "1f1a76ff-82ae-4dac-88d2-20e6858835e3"). InnerVolumeSpecName "kube-api-access-hr5j9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.707436 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f1a76ff-82ae-4dac-88d2-20e6858835e3" (UID: "1f1a76ff-82ae-4dac-88d2-20e6858835e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.746431 4705 generic.go:334] "Generic (PLEG): container finished" podID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerID="d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8" exitCode=0 Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.746489 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngfnt" event={"ID":"1f1a76ff-82ae-4dac-88d2-20e6858835e3","Type":"ContainerDied","Data":"d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8"} Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.746529 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngfnt" event={"ID":"1f1a76ff-82ae-4dac-88d2-20e6858835e3","Type":"ContainerDied","Data":"8b27691923de02efc4eecc71d986b393c2bd7333093c0fb98186573296fa7938"} Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.746564 4705 scope.go:117] "RemoveContainer" containerID="d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.746711 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ngfnt" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.763507 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr5j9\" (UniqueName: \"kubernetes.io/projected/1f1a76ff-82ae-4dac-88d2-20e6858835e3-kube-api-access-hr5j9\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.763538 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.763548 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1a76ff-82ae-4dac-88d2-20e6858835e3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.778603 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngfnt"] Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.783044 4705 scope.go:117] "RemoveContainer" containerID="e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.786156 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ngfnt"] Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.798778 4705 scope.go:117] "RemoveContainer" containerID="79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.812902 4705 scope.go:117] "RemoveContainer" containerID="d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8" Feb 16 14:56:59 crc kubenswrapper[4705]: E0216 14:56:59.813600 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8\": container with ID starting with d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8 not found: ID does not exist" containerID="d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.813659 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8"} err="failed to get container status \"d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8\": rpc error: code = NotFound desc = could not find container \"d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8\": container with ID starting with d2e58c3d2dae0aa6bebc1befc17a7169306bfc3d3da9e0b6ec10eda996a26ed8 not found: ID does not exist" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.813701 4705 scope.go:117] "RemoveContainer" containerID="e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5" Feb 16 14:56:59 crc kubenswrapper[4705]: E0216 14:56:59.814111 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5\": container with ID starting with e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5 not found: ID does not exist" containerID="e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.814177 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5"} err="failed to get container status \"e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5\": rpc error: code = NotFound desc = could not find container \"e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5\": container with ID 
starting with e0d3d242244446e788d2c9ff20ec1dc86b44ac0fb96a7658800fbb9722a8bdf5 not found: ID does not exist" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.814221 4705 scope.go:117] "RemoveContainer" containerID="79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802" Feb 16 14:56:59 crc kubenswrapper[4705]: E0216 14:56:59.814628 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802\": container with ID starting with 79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802 not found: ID does not exist" containerID="79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802" Feb 16 14:56:59 crc kubenswrapper[4705]: I0216 14:56:59.814658 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802"} err="failed to get container status \"79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802\": rpc error: code = NotFound desc = could not find container \"79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802\": container with ID starting with 79ebebbf3c9c3a97ebb62e1aa7a967dd687fef9087b253b212c25a08da312802 not found: ID does not exist" Feb 16 14:57:00 crc kubenswrapper[4705]: I0216 14:57:00.425790 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" path="/var/lib/kubelet/pods/1f1a76ff-82ae-4dac-88d2-20e6858835e3/volumes" Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.556977 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" podUID="100a207c-bfcf-42aa-8233-f760df5a3888" containerName="oauth-openshift" containerID="cri-o://1ab62a114c8a82ff2f7a49e4541517f644160b299d9d80b4f883f76fa7d4c60d" gracePeriod=15 Feb 16 14:57:01 crc 
kubenswrapper[4705]: I0216 14:57:01.688536 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.688618 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.688693 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.689504 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.689580 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a" gracePeriod=600 Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.762715 4705 generic.go:334] "Generic (PLEG): container finished" podID="100a207c-bfcf-42aa-8233-f760df5a3888" 
containerID="1ab62a114c8a82ff2f7a49e4541517f644160b299d9d80b4f883f76fa7d4c60d" exitCode=0 Feb 16 14:57:01 crc kubenswrapper[4705]: I0216 14:57:01.762764 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" event={"ID":"100a207c-bfcf-42aa-8233-f760df5a3888","Type":"ContainerDied","Data":"1ab62a114c8a82ff2f7a49e4541517f644160b299d9d80b4f883f76fa7d4c60d"} Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.034828 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093502 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-service-ca\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093599 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r92bg\" (UniqueName: \"kubernetes.io/projected/100a207c-bfcf-42aa-8233-f760df5a3888-kube-api-access-r92bg\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093637 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-login\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093659 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-router-certs\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093689 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-session\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093716 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-cliconfig\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093748 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-serving-cert\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.093784 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-idp-0-file-data\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094023 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-provider-selection\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094076 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-audit-policies\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094115 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-trusted-ca-bundle\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094157 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-error\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094194 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/100a207c-bfcf-42aa-8233-f760df5a3888-audit-dir\") pod \"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094222 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-ocp-branding-template\") pod 
\"100a207c-bfcf-42aa-8233-f760df5a3888\" (UID: \"100a207c-bfcf-42aa-8233-f760df5a3888\") " Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094724 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094721 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094776 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/100a207c-bfcf-42aa-8233-f760df5a3888-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.094854 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.095069 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.096251 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.096268 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.096278 4705 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.096287 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.096298 4705 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/100a207c-bfcf-42aa-8233-f760df5a3888-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 
16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.101453 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100a207c-bfcf-42aa-8233-f760df5a3888-kube-api-access-r92bg" (OuterVolumeSpecName: "kube-api-access-r92bg") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "kube-api-access-r92bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.101502 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.101588 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.102291 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.102895 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.102972 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.103309 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.105591 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.109014 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "100a207c-bfcf-42aa-8233-f760df5a3888" (UID: "100a207c-bfcf-42aa-8233-f760df5a3888"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197088 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r92bg\" (UniqueName: \"kubernetes.io/projected/100a207c-bfcf-42aa-8233-f760df5a3888-kube-api-access-r92bg\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197137 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197151 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197162 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197173 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197184 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197198 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197212 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.197222 4705 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/100a207c-bfcf-42aa-8233-f760df5a3888-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.770423 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" event={"ID":"100a207c-bfcf-42aa-8233-f760df5a3888","Type":"ContainerDied","Data":"fe3b81e0998e2210d66b3abc493b07a92c35082c815c3be49cace950ab5014e7"} Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.770493 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mqkpd" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.770643 4705 scope.go:117] "RemoveContainer" containerID="1ab62a114c8a82ff2f7a49e4541517f644160b299d9d80b4f883f76fa7d4c60d" Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.772569 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a" exitCode=0 Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.772627 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a"} Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.772655 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"a034fb5b1f0023b5e5ab28e7cd5612968c1d8e98f397066aaa9d090d45277308"} Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.802290 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mqkpd"] Feb 16 14:57:02 crc kubenswrapper[4705]: I0216 14:57:02.807529 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mqkpd"] Feb 16 14:57:04 crc kubenswrapper[4705]: I0216 14:57:04.448897 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100a207c-bfcf-42aa-8233-f760df5a3888" path="/var/lib/kubelet/pods/100a207c-bfcf-42aa-8233-f760df5a3888/volumes" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.311464 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"] Feb 16 
14:57:11 crc kubenswrapper[4705]: E0216 14:57:11.312099 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="extract-content" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312115 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="extract-content" Feb 16 14:57:11 crc kubenswrapper[4705]: E0216 14:57:11.312140 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="registry-server" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312147 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="registry-server" Feb 16 14:57:11 crc kubenswrapper[4705]: E0216 14:57:11.312166 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="extract-utilities" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312175 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="extract-utilities" Feb 16 14:57:11 crc kubenswrapper[4705]: E0216 14:57:11.312185 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100a207c-bfcf-42aa-8233-f760df5a3888" containerName="oauth-openshift" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312195 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="100a207c-bfcf-42aa-8233-f760df5a3888" containerName="oauth-openshift" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312318 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f1a76ff-82ae-4dac-88d2-20e6858835e3" containerName="registry-server" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312338 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="100a207c-bfcf-42aa-8233-f760df5a3888" containerName="oauth-openshift" Feb 16 
14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.312867 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.317150 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.317280 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.317300 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.317335 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.317306 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.317390 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.318689 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.319097 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.319161 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.319428 4705 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.319610 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.319430 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.331681 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"] Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.332650 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.332992 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.342120 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.454745 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-router-certs\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.454808 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-error\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.454838 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-session\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.454870 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.454897 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f077276-54eb-47be-a85c-46b0942e1bb6-audit-dir\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: 
\"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455113 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-service-ca\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455137 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455160 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455175 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc 
kubenswrapper[4705]: I0216 14:57:11.455233 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-audit-policies\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455284 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzzr4\" (UniqueName: \"kubernetes.io/projected/7f077276-54eb-47be-a85c-46b0942e1bb6-kube-api-access-mzzr4\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.455310 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-login\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.556944 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-login\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.557008 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-router-certs\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.557030 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-error\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.557049 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-session\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.557068 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " 
pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.562452 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.562571 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.562660 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f077276-54eb-47be-a85c-46b0942e1bb6-audit-dir\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.562857 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-service-ca\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.562916 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.562977 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.563009 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.563122 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.563194 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-audit-policies\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " 
pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.563242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzzr4\" (UniqueName: \"kubernetes.io/projected/7f077276-54eb-47be-a85c-46b0942e1bb6-kube-api-access-mzzr4\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.563978 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f077276-54eb-47be-a85c-46b0942e1bb6-audit-dir\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.565635 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-service-ca\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.566388 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.568645 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/7f077276-54eb-47be-a85c-46b0942e1bb6-audit-policies\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.570602 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.572797 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.578734 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.582265 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " 
pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.582289 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-router-certs\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.582587 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-system-session\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.582769 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-error\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.583688 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f077276-54eb-47be-a85c-46b0942e1bb6-v4-0-config-user-template-login\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.585921 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzzr4\" (UniqueName: 
\"kubernetes.io/projected/7f077276-54eb-47be-a85c-46b0942e1bb6-kube-api-access-mzzr4\") pod \"oauth-openshift-54f7c55fd8-nrlnt\" (UID: \"7f077276-54eb-47be-a85c-46b0942e1bb6\") " pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:11 crc kubenswrapper[4705]: I0216 14:57:11.627927 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.028448 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt"] Feb 16 14:57:12 crc kubenswrapper[4705]: W0216 14:57:12.034265 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f077276_54eb_47be_a85c_46b0942e1bb6.slice/crio-dd8fa0fbc1d660206c9f656faff29a1a071b95cf3acb19db04864828f3ad3915 WatchSource:0}: Error finding container dd8fa0fbc1d660206c9f656faff29a1a071b95cf3acb19db04864828f3ad3915: Status 404 returned error can't find the container with id dd8fa0fbc1d660206c9f656faff29a1a071b95cf3acb19db04864828f3ad3915 Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.610391 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b958878b7-5qdcz"] Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.611144 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" podUID="d37fdcf5-d38d-4ee6-a395-67c634cc101d" containerName="controller-manager" containerID="cri-o://d9ac65e45a94174f0bd15cdfaf08869840b4af6705995ff73dc2befc62108cb1" gracePeriod=30 Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.703659 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"] Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 
14:57:12.704170 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" podUID="47c8a460-a52e-4669-bce1-28110d7d1d84" containerName="route-controller-manager" containerID="cri-o://4f5bc87283404718ae7ce1ae59ea6deaba46f838300a564d98b5062b5e5e814b" gracePeriod=30 Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.840919 4705 generic.go:334] "Generic (PLEG): container finished" podID="d37fdcf5-d38d-4ee6-a395-67c634cc101d" containerID="d9ac65e45a94174f0bd15cdfaf08869840b4af6705995ff73dc2befc62108cb1" exitCode=0 Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.840995 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" event={"ID":"d37fdcf5-d38d-4ee6-a395-67c634cc101d","Type":"ContainerDied","Data":"d9ac65e45a94174f0bd15cdfaf08869840b4af6705995ff73dc2befc62108cb1"} Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.842768 4705 generic.go:334] "Generic (PLEG): container finished" podID="47c8a460-a52e-4669-bce1-28110d7d1d84" containerID="4f5bc87283404718ae7ce1ae59ea6deaba46f838300a564d98b5062b5e5e814b" exitCode=0 Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.842860 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" event={"ID":"47c8a460-a52e-4669-bce1-28110d7d1d84","Type":"ContainerDied","Data":"4f5bc87283404718ae7ce1ae59ea6deaba46f838300a564d98b5062b5e5e814b"} Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.844584 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" event={"ID":"7f077276-54eb-47be-a85c-46b0942e1bb6","Type":"ContainerStarted","Data":"7907afb5950b10f1cf524c738d4d96e0cb00c8e64bc7e97049284e9d20c7ccea"} Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.844615 4705 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" event={"ID":"7f077276-54eb-47be-a85c-46b0942e1bb6","Type":"ContainerStarted","Data":"dd8fa0fbc1d660206c9f656faff29a1a071b95cf3acb19db04864828f3ad3915"} Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.845066 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.857071 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" Feb 16 14:57:12 crc kubenswrapper[4705]: I0216 14:57:12.898516 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-54f7c55fd8-nrlnt" podStartSLOduration=36.898499644 podStartE2EDuration="36.898499644s" podCreationTimestamp="2026-02-16 14:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:57:12.873799545 +0000 UTC m=+227.058776611" watchObservedRunningTime="2026-02-16 14:57:12.898499644 +0000 UTC m=+227.083476710" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.204435 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.209638 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284724 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-config\") pod \"47c8a460-a52e-4669-bce1-28110d7d1d84\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284773 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-proxy-ca-bundles\") pod \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284801 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d37fdcf5-d38d-4ee6-a395-67c634cc101d-serving-cert\") pod \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284834 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-config\") pod \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284851 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-client-ca\") pod \"47c8a460-a52e-4669-bce1-28110d7d1d84\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284871 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64c87\" 
(UniqueName: \"kubernetes.io/projected/d37fdcf5-d38d-4ee6-a395-67c634cc101d-kube-api-access-64c87\") pod \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284908 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz7vt\" (UniqueName: \"kubernetes.io/projected/47c8a460-a52e-4669-bce1-28110d7d1d84-kube-api-access-fz7vt\") pod \"47c8a460-a52e-4669-bce1-28110d7d1d84\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284939 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47c8a460-a52e-4669-bce1-28110d7d1d84-serving-cert\") pod \"47c8a460-a52e-4669-bce1-28110d7d1d84\" (UID: \"47c8a460-a52e-4669-bce1-28110d7d1d84\") " Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.284958 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-client-ca\") pod \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\" (UID: \"d37fdcf5-d38d-4ee6-a395-67c634cc101d\") " Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.286210 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-client-ca" (OuterVolumeSpecName: "client-ca") pod "d37fdcf5-d38d-4ee6-a395-67c634cc101d" (UID: "d37fdcf5-d38d-4ee6-a395-67c634cc101d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.286477 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-config" (OuterVolumeSpecName: "config") pod "47c8a460-a52e-4669-bce1-28110d7d1d84" (UID: "47c8a460-a52e-4669-bce1-28110d7d1d84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.286599 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d37fdcf5-d38d-4ee6-a395-67c634cc101d" (UID: "d37fdcf5-d38d-4ee6-a395-67c634cc101d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.286503 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-config" (OuterVolumeSpecName: "config") pod "d37fdcf5-d38d-4ee6-a395-67c634cc101d" (UID: "d37fdcf5-d38d-4ee6-a395-67c634cc101d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.286529 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-client-ca" (OuterVolumeSpecName: "client-ca") pod "47c8a460-a52e-4669-bce1-28110d7d1d84" (UID: "47c8a460-a52e-4669-bce1-28110d7d1d84"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.291277 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d37fdcf5-d38d-4ee6-a395-67c634cc101d-kube-api-access-64c87" (OuterVolumeSpecName: "kube-api-access-64c87") pod "d37fdcf5-d38d-4ee6-a395-67c634cc101d" (UID: "d37fdcf5-d38d-4ee6-a395-67c634cc101d"). InnerVolumeSpecName "kube-api-access-64c87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.291932 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c8a460-a52e-4669-bce1-28110d7d1d84-kube-api-access-fz7vt" (OuterVolumeSpecName: "kube-api-access-fz7vt") pod "47c8a460-a52e-4669-bce1-28110d7d1d84" (UID: "47c8a460-a52e-4669-bce1-28110d7d1d84"). InnerVolumeSpecName "kube-api-access-fz7vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.292651 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47c8a460-a52e-4669-bce1-28110d7d1d84-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "47c8a460-a52e-4669-bce1-28110d7d1d84" (UID: "47c8a460-a52e-4669-bce1-28110d7d1d84"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.293469 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d37fdcf5-d38d-4ee6-a395-67c634cc101d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d37fdcf5-d38d-4ee6-a395-67c634cc101d" (UID: "d37fdcf5-d38d-4ee6-a395-67c634cc101d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.385994 4705 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386034 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d37fdcf5-d38d-4ee6-a395-67c634cc101d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386044 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386053 4705 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386063 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64c87\" (UniqueName: \"kubernetes.io/projected/d37fdcf5-d38d-4ee6-a395-67c634cc101d-kube-api-access-64c87\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386076 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz7vt\" (UniqueName: \"kubernetes.io/projected/47c8a460-a52e-4669-bce1-28110d7d1d84-kube-api-access-fz7vt\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386084 4705 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47c8a460-a52e-4669-bce1-28110d7d1d84-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386094 4705 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d37fdcf5-d38d-4ee6-a395-67c634cc101d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.386102 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c8a460-a52e-4669-bce1-28110d7d1d84-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.851724 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" event={"ID":"47c8a460-a52e-4669-bce1-28110d7d1d84","Type":"ContainerDied","Data":"2795e4c2d0c5923a3f989bfa35226c0cc3e322f0d070cbb2c8ce6d68541e7796"} Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.851777 4705 scope.go:117] "RemoveContainer" containerID="4f5bc87283404718ae7ce1ae59ea6deaba46f838300a564d98b5062b5e5e814b" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.851779 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.854678 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" event={"ID":"d37fdcf5-d38d-4ee6-a395-67c634cc101d","Type":"ContainerDied","Data":"11b61afb5226e06f41df7b72351de58846518f769a70308755703e88f42cb5ed"} Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.854823 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b958878b7-5qdcz" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.877223 4705 scope.go:117] "RemoveContainer" containerID="d9ac65e45a94174f0bd15cdfaf08869840b4af6705995ff73dc2befc62108cb1" Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.886950 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"] Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.894229 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58d84ddb98-m6r9v"] Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.917666 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b958878b7-5qdcz"] Feb 16 14:57:13 crc kubenswrapper[4705]: I0216 14:57:13.921216 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b958878b7-5qdcz"] Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.317063 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bfdf56d56-2x59h"] Feb 16 14:57:14 crc kubenswrapper[4705]: E0216 14:57:14.317441 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c8a460-a52e-4669-bce1-28110d7d1d84" containerName="route-controller-manager" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.317457 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c8a460-a52e-4669-bce1-28110d7d1d84" containerName="route-controller-manager" Feb 16 14:57:14 crc kubenswrapper[4705]: E0216 14:57:14.317469 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d37fdcf5-d38d-4ee6-a395-67c634cc101d" containerName="controller-manager" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.317479 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d37fdcf5-d38d-4ee6-a395-67c634cc101d" containerName="controller-manager" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.317585 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c8a460-a52e-4669-bce1-28110d7d1d84" containerName="route-controller-manager" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.317602 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d37fdcf5-d38d-4ee6-a395-67c634cc101d" containerName="controller-manager" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.318138 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.320562 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.320708 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.320837 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.321006 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.321095 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.322875 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.323166 4705 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq"] Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.324652 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.335848 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.350384 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.351206 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.351454 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.352851 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.354496 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.359968 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bfdf56d56-2x59h"] Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.361392 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.366149 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq"] Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.399881 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xqj5\" (UniqueName: \"kubernetes.io/projected/10d74ea2-e93d-4c5b-b659-61bce2500a4d-kube-api-access-2xqj5\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.400165 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-config\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.400303 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmd85\" (UniqueName: \"kubernetes.io/projected/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-kube-api-access-wmd85\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.400465 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-serving-cert\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.400590 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-client-ca\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.400715 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-proxy-ca-bundles\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.400927 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10d74ea2-e93d-4c5b-b659-61bce2500a4d-serving-cert\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.401038 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-config\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.401101 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-client-ca\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: 
\"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.430620 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c8a460-a52e-4669-bce1-28110d7d1d84" path="/var/lib/kubelet/pods/47c8a460-a52e-4669-bce1-28110d7d1d84/volumes" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.431729 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d37fdcf5-d38d-4ee6-a395-67c634cc101d" path="/var/lib/kubelet/pods/d37fdcf5-d38d-4ee6-a395-67c634cc101d/volumes" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503191 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xqj5\" (UniqueName: \"kubernetes.io/projected/10d74ea2-e93d-4c5b-b659-61bce2500a4d-kube-api-access-2xqj5\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503291 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-config\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503341 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmd85\" (UniqueName: \"kubernetes.io/projected/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-kube-api-access-wmd85\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 
14:57:14.503440 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-serving-cert\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503498 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-client-ca\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503571 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-proxy-ca-bundles\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503663 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10d74ea2-e93d-4c5b-b659-61bce2500a4d-serving-cert\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503758 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-config\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " 
pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.503857 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-client-ca\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.504394 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-config\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.505083 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-client-ca\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.506230 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-client-ca\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.506581 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-proxy-ca-bundles\") pod 
\"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.507216 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10d74ea2-e93d-4c5b-b659-61bce2500a4d-config\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.512063 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-serving-cert\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.518922 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xqj5\" (UniqueName: \"kubernetes.io/projected/10d74ea2-e93d-4c5b-b659-61bce2500a4d-kube-api-access-2xqj5\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.521870 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10d74ea2-e93d-4c5b-b659-61bce2500a4d-serving-cert\") pod \"controller-manager-7bfdf56d56-2x59h\" (UID: \"10d74ea2-e93d-4c5b-b659-61bce2500a4d\") " pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.527830 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmd85\" (UniqueName: 
\"kubernetes.io/projected/f2ceaa67-4f36-4622-88ab-c2d5413c57f6-kube-api-access-wmd85\") pod \"route-controller-manager-677f9fd894-2hvcq\" (UID: \"f2ceaa67-4f36-4622-88ab-c2d5413c57f6\") " pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.649751 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.650927 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.872459 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq"] Feb 16 14:57:14 crc kubenswrapper[4705]: W0216 14:57:14.881723 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2ceaa67_4f36_4622_88ab_c2d5413c57f6.slice/crio-18952a77e126116b9a62c893eba03af44ee1224bc5a66002e27322486cc69b24 WatchSource:0}: Error finding container 18952a77e126116b9a62c893eba03af44ee1224bc5a66002e27322486cc69b24: Status 404 returned error can't find the container with id 18952a77e126116b9a62c893eba03af44ee1224bc5a66002e27322486cc69b24 Feb 16 14:57:14 crc kubenswrapper[4705]: I0216 14:57:14.932170 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bfdf56d56-2x59h"] Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.871714 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" event={"ID":"10d74ea2-e93d-4c5b-b659-61bce2500a4d","Type":"ContainerStarted","Data":"f7756b3b41b2751a91ed206e5bfc85f605958d1fa290c9e840cd6d51cfa383d1"} Feb 16 14:57:15 crc 
kubenswrapper[4705]: I0216 14:57:15.872346 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" event={"ID":"10d74ea2-e93d-4c5b-b659-61bce2500a4d","Type":"ContainerStarted","Data":"f10b2d2413920cfffbdff891cf9134716df1497e964e0d119be7b52d0fe2a774"} Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.872468 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.874100 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" event={"ID":"f2ceaa67-4f36-4622-88ab-c2d5413c57f6","Type":"ContainerStarted","Data":"40b2b93e397cda8a5945d848024a56a0203fe4fb31672e3655442a0fe5eba83b"} Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.874161 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" event={"ID":"f2ceaa67-4f36-4622-88ab-c2d5413c57f6","Type":"ContainerStarted","Data":"18952a77e126116b9a62c893eba03af44ee1224bc5a66002e27322486cc69b24"} Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.874378 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.878746 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.880835 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.896919 4705 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-7bfdf56d56-2x59h" podStartSLOduration=3.896903311 podStartE2EDuration="3.896903311s" podCreationTimestamp="2026-02-16 14:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:57:15.893241648 +0000 UTC m=+230.078218734" watchObservedRunningTime="2026-02-16 14:57:15.896903311 +0000 UTC m=+230.081880397" Feb 16 14:57:15 crc kubenswrapper[4705]: I0216 14:57:15.919676 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-677f9fd894-2hvcq" podStartSLOduration=3.919655535 podStartE2EDuration="3.919655535s" podCreationTimestamp="2026-02-16 14:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:57:15.917080472 +0000 UTC m=+230.102057558" watchObservedRunningTime="2026-02-16 14:57:15.919655535 +0000 UTC m=+230.104632611" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.175249 4705 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177191 4705 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177234 4705 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177335 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177737 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1" gracePeriod=15 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177800 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6" gracePeriod=15 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177863 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852" gracePeriod=15 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177906 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9" gracePeriod=15 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.177939 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373" gracePeriod=15 Feb 16 14:57:22 crc 
kubenswrapper[4705]: I0216 14:57:22.180904 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.182969 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183049 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.183073 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183087 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.183115 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183140 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.183158 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183173 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.183200 4705 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183213 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.183230 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183243 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:57:22 crc kubenswrapper[4705]: E0216 14:57:22.183261 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183273 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183552 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183579 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183604 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183620 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:57:22 crc 
kubenswrapper[4705]: I0216 14:57:22.183645 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.183666 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.319978 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320428 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320456 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320486 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320514 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320710 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320759 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.320913 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422261 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422310 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422339 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422384 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422399 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422419 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422421 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422460 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422468 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422485 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422485 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc 
kubenswrapper[4705]: I0216 14:57:22.422502 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422444 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422396 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422529 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.422546 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.929734 4705 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.931232 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.932111 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852" exitCode=0 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.932138 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6" exitCode=0 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.932149 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373" exitCode=0 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.932158 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9" exitCode=2 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.932229 4705 scope.go:117] "RemoveContainer" containerID="50356771f75819816169133913d6add0703ca9c9c3923652a2f48df1a9cb8f0d" Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.934350 4705 generic.go:334] "Generic (PLEG): container finished" podID="6b45f345-45b8-4e21-a4da-46e4d43e429e" containerID="8a30aa7cf7e0f680c219c737827d7511374124ec3b0f2c971c1e7c9989007cdc" exitCode=0 Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.934395 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"6b45f345-45b8-4e21-a4da-46e4d43e429e","Type":"ContainerDied","Data":"8a30aa7cf7e0f680c219c737827d7511374124ec3b0f2c971c1e7c9989007cdc"} Feb 16 14:57:22 crc kubenswrapper[4705]: I0216 14:57:22.935287 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:23 crc kubenswrapper[4705]: I0216 14:57:23.942251 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.452766 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.454525 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.550208 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b45f345-45b8-4e21-a4da-46e4d43e429e-kube-api-access\") pod \"6b45f345-45b8-4e21-a4da-46e4d43e429e\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.550307 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-kubelet-dir\") pod \"6b45f345-45b8-4e21-a4da-46e4d43e429e\" 
(UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.550324 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-var-lock\") pod \"6b45f345-45b8-4e21-a4da-46e4d43e429e\" (UID: \"6b45f345-45b8-4e21-a4da-46e4d43e429e\") " Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.550641 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-var-lock" (OuterVolumeSpecName: "var-lock") pod "6b45f345-45b8-4e21-a4da-46e4d43e429e" (UID: "6b45f345-45b8-4e21-a4da-46e4d43e429e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.550908 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6b45f345-45b8-4e21-a4da-46e4d43e429e" (UID: "6b45f345-45b8-4e21-a4da-46e4d43e429e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.557971 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b45f345-45b8-4e21-a4da-46e4d43e429e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6b45f345-45b8-4e21-a4da-46e4d43e429e" (UID: "6b45f345-45b8-4e21-a4da-46e4d43e429e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.651330 4705 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.651723 4705 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6b45f345-45b8-4e21-a4da-46e4d43e429e-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.651732 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b45f345-45b8-4e21-a4da-46e4d43e429e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.655870 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.656748 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.657195 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.657453 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752209 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752273 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752300 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752351 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752443 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752583 4705 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752597 4705 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.752607 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.854143 4705 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.974217 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.978828 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1" exitCode=0 Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.978968 4705 scope.go:117] "RemoveContainer" containerID="e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.979276 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.982671 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6b45f345-45b8-4e21-a4da-46e4d43e429e","Type":"ContainerDied","Data":"87f69726e97d736145a6290998a64cafeb28b2d4eb89eea3b13c3f0bb2801014"} Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.982731 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:57:24 crc kubenswrapper[4705]: I0216 14:57:24.982737 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87f69726e97d736145a6290998a64cafeb28b2d4eb89eea3b13c3f0bb2801014" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.002351 4705 scope.go:117] "RemoveContainer" containerID="d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.015545 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.016109 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.019811 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.020289 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection 
refused" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.025096 4705 scope.go:117] "RemoveContainer" containerID="c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.040511 4705 scope.go:117] "RemoveContainer" containerID="7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.056608 4705 scope.go:117] "RemoveContainer" containerID="56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.072337 4705 scope.go:117] "RemoveContainer" containerID="f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.095694 4705 scope.go:117] "RemoveContainer" containerID="e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852" Feb 16 14:57:25 crc kubenswrapper[4705]: E0216 14:57:25.096469 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\": container with ID starting with e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852 not found: ID does not exist" containerID="e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.096574 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852"} err="failed to get container status \"e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\": rpc error: code = NotFound desc = could not find container \"e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852\": container with ID starting with e8072625667780b731a23c1e93b2c413a688cc7c10499b63405194697ce8c852 not found: ID does not exist" Feb 16 14:57:25 crc 
kubenswrapper[4705]: I0216 14:57:25.096678 4705 scope.go:117] "RemoveContainer" containerID="d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6" Feb 16 14:57:25 crc kubenswrapper[4705]: E0216 14:57:25.097101 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\": container with ID starting with d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6 not found: ID does not exist" containerID="d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.097155 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6"} err="failed to get container status \"d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\": rpc error: code = NotFound desc = could not find container \"d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6\": container with ID starting with d98b08afcd799d2262ca73e58c63679d914efd04772bdb52b5206a02316ae2e6 not found: ID does not exist" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.097195 4705 scope.go:117] "RemoveContainer" containerID="c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373" Feb 16 14:57:25 crc kubenswrapper[4705]: E0216 14:57:25.097701 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\": container with ID starting with c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373 not found: ID does not exist" containerID="c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.097794 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373"} err="failed to get container status \"c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\": rpc error: code = NotFound desc = could not find container \"c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373\": container with ID starting with c888506b07da9ca9bff4994de61090b890bbbe20a3bd689df7db84d0e6c6f373 not found: ID does not exist" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.097866 4705 scope.go:117] "RemoveContainer" containerID="7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9" Feb 16 14:57:25 crc kubenswrapper[4705]: E0216 14:57:25.098307 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\": container with ID starting with 7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9 not found: ID does not exist" containerID="7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.098392 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9"} err="failed to get container status \"7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\": rpc error: code = NotFound desc = could not find container \"7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9\": container with ID starting with 7ccf082c149ee82ecff8a8ba3cc5415540742b1677f0dbcff6133a52de2afee9 not found: ID does not exist" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.098418 4705 scope.go:117] "RemoveContainer" containerID="56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1" Feb 16 14:57:25 crc kubenswrapper[4705]: E0216 14:57:25.098909 4705 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\": container with ID starting with 56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1 not found: ID does not exist" containerID="56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.099039 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1"} err="failed to get container status \"56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\": rpc error: code = NotFound desc = could not find container \"56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1\": container with ID starting with 56d68f43646c3ea706744a14ffb36a8f934cd1b2279265202b216806e7b3a0c1 not found: ID does not exist" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.099173 4705 scope.go:117] "RemoveContainer" containerID="f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9" Feb 16 14:57:25 crc kubenswrapper[4705]: E0216 14:57:25.099598 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\": container with ID starting with f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9 not found: ID does not exist" containerID="f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9" Feb 16 14:57:25 crc kubenswrapper[4705]: I0216 14:57:25.099635 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9"} err="failed to get container status \"f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\": rpc error: code = NotFound desc = could not find container 
\"f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9\": container with ID starting with f76889d4c75fdf119454920be441d52eb3b372c5fd7002bcfa2602f766652fe9 not found: ID does not exist" Feb 16 14:57:26 crc kubenswrapper[4705]: I0216 14:57:26.424419 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: I0216 14:57:26.425620 4705 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: I0216 14:57:26.432570 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.718215 4705 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.719251 4705 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.719812 4705 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.720546 4705 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.720895 4705 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:26 crc kubenswrapper[4705]: I0216 14:57:26.720937 4705 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.721317 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="200ms" Feb 16 14:57:26 crc kubenswrapper[4705]: E0216 14:57:26.922238 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="400ms" Feb 16 14:57:27 crc kubenswrapper[4705]: E0216 14:57:27.244735 4705 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:27 crc kubenswrapper[4705]: 
I0216 14:57:27.245126 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:27 crc kubenswrapper[4705]: W0216 14:57:27.263883 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-a428ed8eacbcdca81937f2327f58b1b538d4838303fe842b39080454eb5ab8e5 WatchSource:0}: Error finding container a428ed8eacbcdca81937f2327f58b1b538d4838303fe842b39080454eb5ab8e5: Status 404 returned error can't find the container with id a428ed8eacbcdca81937f2327f58b1b538d4838303fe842b39080454eb5ab8e5 Feb 16 14:57:27 crc kubenswrapper[4705]: E0216 14:57:27.267453 4705 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894c1fd55651078 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:57:27.26702092 +0000 UTC m=+241.451997996,LastTimestamp:2026-02-16 14:57:27.26702092 +0000 UTC m=+241.451997996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 14:57:27 crc kubenswrapper[4705]: E0216 14:57:27.326926 4705 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="800ms" Feb 16 14:57:28 crc kubenswrapper[4705]: I0216 14:57:28.001582 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c"} Feb 16 14:57:28 crc kubenswrapper[4705]: I0216 14:57:28.002132 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a428ed8eacbcdca81937f2327f58b1b538d4838303fe842b39080454eb5ab8e5"} Feb 16 14:57:28 crc kubenswrapper[4705]: E0216 14:57:28.002897 4705 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:28 crc kubenswrapper[4705]: I0216 14:57:28.002989 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:28 crc kubenswrapper[4705]: E0216 14:57:28.129285 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="1.6s" Feb 16 14:57:29 crc kubenswrapper[4705]: 
E0216 14:57:29.731249 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="3.2s" Feb 16 14:57:30 crc kubenswrapper[4705]: E0216 14:57:30.032916 4705 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894c1fd55651078 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:57:27.26702092 +0000 UTC m=+241.451997996,LastTimestamp:2026-02-16 14:57:27.26702092 +0000 UTC m=+241.451997996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 14:57:32 crc kubenswrapper[4705]: I0216 14:57:32.418899 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:32 crc kubenswrapper[4705]: I0216 14:57:32.419498 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:32 crc kubenswrapper[4705]: I0216 14:57:32.440009 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:32 crc kubenswrapper[4705]: I0216 14:57:32.440041 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:32 crc kubenswrapper[4705]: E0216 14:57:32.440311 4705 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:32 crc kubenswrapper[4705]: I0216 14:57:32.440820 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:32 crc kubenswrapper[4705]: E0216 14:57:32.932881 4705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="6.4s" Feb 16 14:57:33 crc kubenswrapper[4705]: I0216 14:57:33.036269 4705 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="7e932b61fe25a189a4870f79d3277397e7e7646a88406dff42273f87ffe56204" exitCode=0 Feb 16 14:57:33 crc kubenswrapper[4705]: I0216 14:57:33.036323 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"7e932b61fe25a189a4870f79d3277397e7e7646a88406dff42273f87ffe56204"} Feb 16 14:57:33 crc kubenswrapper[4705]: I0216 14:57:33.036400 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5392b6374769c9d76a1471cc24a055ec975e57d9ba591533996058c3caa92bee"} Feb 16 14:57:33 crc kubenswrapper[4705]: I0216 14:57:33.036749 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:33 crc kubenswrapper[4705]: I0216 14:57:33.036779 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:33 crc kubenswrapper[4705]: I0216 14:57:33.037008 4705 status_manager.go:851] "Failed to get status for pod" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Feb 16 14:57:33 crc kubenswrapper[4705]: E0216 14:57:33.037149 4705 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:34 crc kubenswrapper[4705]: I0216 14:57:34.054627 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4b01b038538dec6d83bad14b3cefe007b6a9bcd90e4678d675c93fd0baaa9744"} Feb 16 14:57:34 crc kubenswrapper[4705]: I0216 14:57:34.056071 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"272a67bc66194163d69fd3ad217ce215708b666adffad8ef4256d1a1abd0d19c"} Feb 16 14:57:34 crc kubenswrapper[4705]: I0216 14:57:34.056184 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f666f38a0b77bb446295d2fd9b5790757f35f127ca82ceac71ac5f41f356bdde"} Feb 16 14:57:34 crc kubenswrapper[4705]: I0216 14:57:34.056283 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4b846ac25d2bd21e73f2aca71805cff5f016db071b9bb6b7e3bbfa624db1f5bf"} Feb 16 14:57:35 crc kubenswrapper[4705]: I0216 14:57:35.070805 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1b80e563543ae5ed80b7a022a4d8081deb175876726a96243e8a0084ff1f2074"} Feb 16 14:57:35 crc kubenswrapper[4705]: I0216 14:57:35.071774 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:35 crc kubenswrapper[4705]: I0216 14:57:35.072028 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:35 crc kubenswrapper[4705]: I0216 14:57:35.072158 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.088527 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.089726 4705 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9" exitCode=1 Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.089804 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9"} Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.090986 4705 scope.go:117] "RemoveContainer" containerID="a61e2b6baeb217f6a5dd86dfff4382ee06a491948b517568b80c31733cb45ef9" Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.441161 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 
14:57:37.441238 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.449318 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:37 crc kubenswrapper[4705]: I0216 14:57:37.583597 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:38 crc kubenswrapper[4705]: I0216 14:57:38.136688 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 14:57:38 crc kubenswrapper[4705]: I0216 14:57:38.137261 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"eb9d3cb6732a77878233522dacb3ee3c5d14e1c4ab14cc9b0d5f49c55a000db0"} Feb 16 14:57:40 crc kubenswrapper[4705]: I0216 14:57:40.086737 4705 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:40 crc kubenswrapper[4705]: I0216 14:57:40.138693 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8b7096f-55d2-4296-a1cd-f39b33fcc539" Feb 16 14:57:40 crc kubenswrapper[4705]: I0216 14:57:40.150283 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:40 crc kubenswrapper[4705]: I0216 14:57:40.150351 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 
16 14:57:40 crc kubenswrapper[4705]: I0216 14:57:40.161444 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:40 crc kubenswrapper[4705]: I0216 14:57:40.171919 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8b7096f-55d2-4296-a1cd-f39b33fcc539" Feb 16 14:57:41 crc kubenswrapper[4705]: I0216 14:57:41.155011 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:41 crc kubenswrapper[4705]: I0216 14:57:41.155068 4705 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:41 crc kubenswrapper[4705]: I0216 14:57:41.159618 4705 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8b7096f-55d2-4296-a1cd-f39b33fcc539" Feb 16 14:57:46 crc kubenswrapper[4705]: I0216 14:57:46.244055 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:46 crc kubenswrapper[4705]: I0216 14:57:46.249010 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:46 crc kubenswrapper[4705]: I0216 14:57:46.773554 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 14:57:46 crc kubenswrapper[4705]: I0216 14:57:46.775614 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 14:57:46 crc 
kubenswrapper[4705]: I0216 14:57:46.931449 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.019624 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.023670 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.151402 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.191706 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.193072 4705 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.197695 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.200903 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.201075 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.201506 4705 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.201536 4705 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="60a9a247-f180-4ddd-8577-40f4cfa074da" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.207097 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.248785 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.250999 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.251157 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=7.251119761 podStartE2EDuration="7.251119761s" podCreationTimestamp="2026-02-16 14:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:57:47.249183746 +0000 UTC m=+261.434160852" watchObservedRunningTime="2026-02-16 14:57:47.251119761 +0000 UTC m=+261.436096877" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.292941 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.293350 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.463961 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.922604 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 14:57:47 crc 
kubenswrapper[4705]: I0216 14:57:47.959880 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.985568 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 14:57:47 crc kubenswrapper[4705]: I0216 14:57:47.993328 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.047066 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.098576 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.235098 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.343486 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.413939 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.415843 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.435969 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.455364 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.523835 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.613554 4705 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.641207 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.665884 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.725179 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.725817 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.819211 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 14:57:48 crc kubenswrapper[4705]: I0216 14:57:48.827903 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.130822 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.170238 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.359575 
4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.402419 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.420831 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.431708 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.510804 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.654500 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.686800 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 14:57:49 crc kubenswrapper[4705]: I0216 14:57:49.971458 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.162475 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.195921 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.237222 4705 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 14:57:50 crc 
kubenswrapper[4705]: I0216 14:57:50.237526 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c" gracePeriod=5 Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.302911 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.438303 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.449821 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.450839 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.598519 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.638511 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.712149 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.732302 4705 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.780414 4705 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.919780 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 14:57:50 crc kubenswrapper[4705]: I0216 14:57:50.955002 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.003301 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.019002 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.049874 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.178566 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.202132 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.277059 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.327197 4705 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.371961 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.408653 4705 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.443606 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.676153 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.791960 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.846548 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.862975 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 14:57:51 crc kubenswrapper[4705]: I0216 14:57:51.875036 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 14:57:52 crc kubenswrapper[4705]: I0216 14:57:52.065656 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 14:57:52 crc kubenswrapper[4705]: I0216 14:57:52.500761 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 14:57:52 crc kubenswrapper[4705]: I0216 14:57:52.532875 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 14:57:52 crc kubenswrapper[4705]: I0216 14:57:52.739761 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 14:57:52 crc kubenswrapper[4705]: 
I0216 14:57:52.785053 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 14:57:53 crc kubenswrapper[4705]: I0216 14:57:53.085445 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 14:57:53 crc kubenswrapper[4705]: I0216 14:57:53.418194 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 14:57:53 crc kubenswrapper[4705]: I0216 14:57:53.738559 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 14:57:53 crc kubenswrapper[4705]: I0216 14:57:53.933883 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 14:57:53 crc kubenswrapper[4705]: I0216 14:57:53.949062 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.023804 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.220058 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.425323 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.425336 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.440240 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.587641 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 14:57:54 crc kubenswrapper[4705]: I0216 14:57:54.743676 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.170642 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.215053 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.316573 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.533640 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.637307 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.829991 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.830502 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.862800 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.862897 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.862924 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.862950 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863004 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863018 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863046 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863162 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863185 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863336 4705 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863350 4705 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863359 4705 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.863384 4705 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.874119 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 14:57:55 crc kubenswrapper[4705]: I0216 14:57:55.964585 4705 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.039539 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.093622 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.251951 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.252062 4705 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c" exitCode=137
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.252157 4705 scope.go:117] "RemoveContainer" containerID="75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.252436 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.275268 4705 scope.go:117] "RemoveContainer" containerID="75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c"
Feb 16 14:57:56 crc kubenswrapper[4705]: E0216 14:57:56.275761 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c\": container with ID starting with 75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c not found: ID does not exist" containerID="75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.275798 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c"} err="failed to get container status \"75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c\": rpc error: code = NotFound desc = could not find container \"75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c\": container with ID starting with 75891b267ed95d22fca6a6eecbafac074b219d9c2ba33f52c327658407fcfa7c not found: ID does not exist"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.370399 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.427196 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.435261 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.447049 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.505030 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.568871 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.600534 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.625890 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.634557 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.757755 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 16 14:57:56 crc kubenswrapper[4705]: I0216 14:57:56.935636 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.182631 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.253757 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.279109 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.280990 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.290530 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.336656 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.375645 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.488923 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.584510 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.631852 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.719020 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.796837 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.807428 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.819148 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.905683 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.917222 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 16 14:57:57 crc kubenswrapper[4705]: I0216 14:57:57.960711 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.158793 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.180286 4705 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.213799 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.320198 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.340518 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.407464 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.520890 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.594351 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.596109 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.614604 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.659254 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.684758 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.715630 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 16 14:57:58 crc kubenswrapper[4705]: I0216 14:57:58.852438 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.010237 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.111089 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.207662 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.333980 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.453714 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.513754 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.520977 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.664931 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.675567 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.716839 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.791959 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.817889 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.840664 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.946947 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 16 14:57:59 crc kubenswrapper[4705]: I0216 14:57:59.991698 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.002636 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.015976 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.100659 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.101077 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.136786 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.213142 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.318551 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.371813 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.419942 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.489763 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.563775 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.563872 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.613973 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.620276 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.661238 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.747472 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.748617 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.870426 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.882280 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 14:58:00 crc kubenswrapper[4705]: I0216 14:58:00.916401 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.055155 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.058872 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.066832 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.157912 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.194897 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.459451 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.550034 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.569477 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.602703 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.621138 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.674137 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.784787 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.800000 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.804244 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.869517 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 16 14:58:01 crc kubenswrapper[4705]: I0216 14:58:01.954736 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.041925 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.046839 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.348953 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.390679 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.422099 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.459538 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.494351 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.522106 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.552343 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.629354 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.675744 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.835710 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 16 14:58:02 crc kubenswrapper[4705]: I0216 14:58:02.915576 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.047335 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.119204 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.240895 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.265947 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.312983 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.372590 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.421159 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.582303 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.678552 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.741641 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.774822 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 16 14:58:03 crc kubenswrapper[4705]: I0216 14:58:03.989927 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.105015 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.250680 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.395295 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.550482 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.662318 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.682980 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.711492 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.833905 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 14:58:04 crc kubenswrapper[4705]: I0216 14:58:04.989821 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.030773 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.242421 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.562974 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.610352 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.665977 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.873772 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 16 14:58:05 crc kubenswrapper[4705]: I0216 14:58:05.985908 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.086026 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.095400 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.395160 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.455689 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.518690 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.520095 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.536577 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.584107 4705 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.608163 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.695314 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.779895 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.811727 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 16 14:58:06 crc kubenswrapper[4705]: I0216 14:58:06.923725 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.062429 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.148980 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.211735 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.435942 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.534852 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.789334 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.843276 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.844719 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 16 14:58:07 crc kubenswrapper[4705]: I0216 14:58:07.881456 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 14:58:08 crc kubenswrapper[4705]: I0216 14:58:08.554736 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 14:58:09 crc kubenswrapper[4705]: I0216 14:58:09.203816 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 16 14:58:10 crc kubenswrapper[4705]: I0216 14:58:10.006447 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.776187 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sj9bt"]
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.777087 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sj9bt" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="registry-server" containerID="cri-o://d21d87e204d7c7dd1f5e531f27be7d67418c7a9af9ade838a90a03b259c16e3c" gracePeriod=30
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.781077 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvxpr"]
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.781287 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wvxpr" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="registry-server" containerID="cri-o://d33c37236673d66e2901d64db78200c763977b99a1686820a64dbf3d5e56fb7b" gracePeriod=30
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.787435 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bbtvp"]
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.787637 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator" containerID="cri-o://004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61" gracePeriod=30
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.811278 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh5s"]
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.811569 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gmh5s" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="registry-server" containerID="cri-o://3cb8479b4305f364c5f6ead421d66ba76fae3e3cb48c375431bc5f1d1839a870" gracePeriod=30
Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.827947 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qkkgp"]
Feb 16 14:58:11 crc
kubenswrapper[4705]: I0216 14:58:11.828498 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qkkgp" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="registry-server" containerID="cri-o://44b2753298e481a1af81ac801ec3b5340db0dc87e754c807e8d3e4dee8fa47ff" gracePeriod=30 Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.850911 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ghmpd"] Feb 16 14:58:11 crc kubenswrapper[4705]: E0216 14:58:11.851490 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.851504 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 14:58:11 crc kubenswrapper[4705]: E0216 14:58:11.851518 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" containerName="installer" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.851525 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" containerName="installer" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.851617 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.851628 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b45f345-45b8-4e21-a4da-46e4d43e429e" containerName="installer" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.851988 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.870619 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ghmpd"] Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.918725 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88197577-5157-4d99-9813-eb3173530b4f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.918780 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7jv8\" (UniqueName: \"kubernetes.io/projected/88197577-5157-4d99-9813-eb3173530b4f-kube-api-access-k7jv8\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:11 crc kubenswrapper[4705]: I0216 14:58:11.918818 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88197577-5157-4d99-9813-eb3173530b4f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.020752 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7jv8\" (UniqueName: \"kubernetes.io/projected/88197577-5157-4d99-9813-eb3173530b4f-kube-api-access-k7jv8\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: 
\"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.020814 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88197577-5157-4d99-9813-eb3173530b4f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.020889 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88197577-5157-4d99-9813-eb3173530b4f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.023905 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/88197577-5157-4d99-9813-eb3173530b4f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.028947 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/88197577-5157-4d99-9813-eb3173530b4f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.045810 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k7jv8\" (UniqueName: \"kubernetes.io/projected/88197577-5157-4d99-9813-eb3173530b4f-kube-api-access-k7jv8\") pod \"marketplace-operator-79b997595-ghmpd\" (UID: \"88197577-5157-4d99-9813-eb3173530b4f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.189461 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.274651 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.324609 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-operator-metrics\") pod \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.324733 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-trusted-ca\") pod \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.325036 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc5bd\" (UniqueName: \"kubernetes.io/projected/5621ad75-f2c2-44c8-aff8-ed4da48fc415-kube-api-access-qc5bd\") pod \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\" (UID: \"5621ad75-f2c2-44c8-aff8-ed4da48fc415\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.326484 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5621ad75-f2c2-44c8-aff8-ed4da48fc415" (UID: "5621ad75-f2c2-44c8-aff8-ed4da48fc415"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.331485 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5621ad75-f2c2-44c8-aff8-ed4da48fc415-kube-api-access-qc5bd" (OuterVolumeSpecName: "kube-api-access-qc5bd") pod "5621ad75-f2c2-44c8-aff8-ed4da48fc415" (UID: "5621ad75-f2c2-44c8-aff8-ed4da48fc415"). InnerVolumeSpecName "kube-api-access-qc5bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.336519 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5621ad75-f2c2-44c8-aff8-ed4da48fc415" (UID: "5621ad75-f2c2-44c8-aff8-ed4da48fc415"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.383758 4705 generic.go:334] "Generic (PLEG): container finished" podID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerID="004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61" exitCode=0 Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.383839 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" event={"ID":"5621ad75-f2c2-44c8-aff8-ed4da48fc415","Type":"ContainerDied","Data":"004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61"} Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.383859 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.383894 4705 scope.go:117] "RemoveContainer" containerID="004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.383878 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bbtvp" event={"ID":"5621ad75-f2c2-44c8-aff8-ed4da48fc415","Type":"ContainerDied","Data":"faa1e5018382734db35e1205c39088b34faea391ec6e62672b88da102016cb47"} Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.390251 4705 generic.go:334] "Generic (PLEG): container finished" podID="895390cd-d0f8-46da-a932-6cccd295f203" containerID="d33c37236673d66e2901d64db78200c763977b99a1686820a64dbf3d5e56fb7b" exitCode=0 Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.390330 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerDied","Data":"d33c37236673d66e2901d64db78200c763977b99a1686820a64dbf3d5e56fb7b"} Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.393964 4705 generic.go:334] "Generic (PLEG): container finished" podID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerID="d21d87e204d7c7dd1f5e531f27be7d67418c7a9af9ade838a90a03b259c16e3c" exitCode=0 Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.394044 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj9bt" event={"ID":"c8efc871-44f0-4bbd-b639-6adaee23319a","Type":"ContainerDied","Data":"d21d87e204d7c7dd1f5e531f27be7d67418c7a9af9ade838a90a03b259c16e3c"} Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.397865 4705 generic.go:334] "Generic (PLEG): container finished" podID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerID="44b2753298e481a1af81ac801ec3b5340db0dc87e754c807e8d3e4dee8fa47ff" 
exitCode=0 Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.397910 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerDied","Data":"44b2753298e481a1af81ac801ec3b5340db0dc87e754c807e8d3e4dee8fa47ff"} Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.421512 4705 scope.go:117] "RemoveContainer" containerID="004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.436340 4705 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.436401 4705 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5621ad75-f2c2-44c8-aff8-ed4da48fc415-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.436411 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc5bd\" (UniqueName: \"kubernetes.io/projected/5621ad75-f2c2-44c8-aff8-ed4da48fc415-kube-api-access-qc5bd\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: E0216 14:58:12.436718 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61\": container with ID starting with 004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61 not found: ID does not exist" containerID="004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.436749 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61"} err="failed to get container status \"004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61\": rpc error: code = NotFound desc = could not find container \"004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61\": container with ID starting with 004c79a95dd5e5d0415346cef68ca51670d560a1fd8b41f3ba9047ce6869df61 not found: ID does not exist" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.438209 4705 generic.go:334] "Generic (PLEG): container finished" podID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerID="3cb8479b4305f364c5f6ead421d66ba76fae3e3cb48c375431bc5f1d1839a870" exitCode=0 Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.452006 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bbtvp"] Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.452039 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh5s" event={"ID":"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788","Type":"ContainerDied","Data":"3cb8479b4305f364c5f6ead421d66ba76fae3e3cb48c375431bc5f1d1839a870"} Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.463317 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bbtvp"] Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.476319 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sj9bt" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.484562 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmh5s" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.486410 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qkkgp" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.509430 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvxpr" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.559627 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9mn7\" (UniqueName: \"kubernetes.io/projected/c8efc871-44f0-4bbd-b639-6adaee23319a-kube-api-access-x9mn7\") pod \"c8efc871-44f0-4bbd-b639-6adaee23319a\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.559790 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-catalog-content\") pod \"c8efc871-44f0-4bbd-b639-6adaee23319a\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.559833 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-utilities\") pod \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.560062 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfjqw\" (UniqueName: \"kubernetes.io/projected/112518bc-4caf-44c2-8920-185e2e90cc9b-kube-api-access-lfjqw\") pod \"112518bc-4caf-44c2-8920-185e2e90cc9b\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.560090 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-catalog-content\") pod 
\"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.560128 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-utilities\") pod \"112518bc-4caf-44c2-8920-185e2e90cc9b\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.560167 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-catalog-content\") pod \"112518bc-4caf-44c2-8920-185e2e90cc9b\" (UID: \"112518bc-4caf-44c2-8920-185e2e90cc9b\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.560189 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmb82\" (UniqueName: \"kubernetes.io/projected/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-kube-api-access-hmb82\") pod \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\" (UID: \"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.560617 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-utilities\") pod \"c8efc871-44f0-4bbd-b639-6adaee23319a\" (UID: \"c8efc871-44f0-4bbd-b639-6adaee23319a\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.564657 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8efc871-44f0-4bbd-b639-6adaee23319a-kube-api-access-x9mn7" (OuterVolumeSpecName: "kube-api-access-x9mn7") pod "c8efc871-44f0-4bbd-b639-6adaee23319a" (UID: "c8efc871-44f0-4bbd-b639-6adaee23319a"). InnerVolumeSpecName "kube-api-access-x9mn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.564802 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-utilities" (OuterVolumeSpecName: "utilities") pod "2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" (UID: "2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.564799 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112518bc-4caf-44c2-8920-185e2e90cc9b-kube-api-access-lfjqw" (OuterVolumeSpecName: "kube-api-access-lfjqw") pod "112518bc-4caf-44c2-8920-185e2e90cc9b" (UID: "112518bc-4caf-44c2-8920-185e2e90cc9b"). InnerVolumeSpecName "kube-api-access-lfjqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.564915 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-utilities" (OuterVolumeSpecName: "utilities") pod "112518bc-4caf-44c2-8920-185e2e90cc9b" (UID: "112518bc-4caf-44c2-8920-185e2e90cc9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.565930 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-kube-api-access-hmb82" (OuterVolumeSpecName: "kube-api-access-hmb82") pod "2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" (UID: "2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788"). InnerVolumeSpecName "kube-api-access-hmb82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.567118 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-utilities" (OuterVolumeSpecName: "utilities") pod "c8efc871-44f0-4bbd-b639-6adaee23319a" (UID: "c8efc871-44f0-4bbd-b639-6adaee23319a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.617166 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" (UID: "2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.631422 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8efc871-44f0-4bbd-b639-6adaee23319a" (UID: "c8efc871-44f0-4bbd-b639-6adaee23319a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661545 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-catalog-content\") pod \"895390cd-d0f8-46da-a932-6cccd295f203\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661592 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-utilities\") pod \"895390cd-d0f8-46da-a932-6cccd295f203\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661630 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bn7w\" (UniqueName: \"kubernetes.io/projected/895390cd-d0f8-46da-a932-6cccd295f203-kube-api-access-7bn7w\") pod \"895390cd-d0f8-46da-a932-6cccd295f203\" (UID: \"895390cd-d0f8-46da-a932-6cccd295f203\") " Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661925 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmb82\" (UniqueName: \"kubernetes.io/projected/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-kube-api-access-hmb82\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661944 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661953 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9mn7\" (UniqueName: \"kubernetes.io/projected/c8efc871-44f0-4bbd-b639-6adaee23319a-kube-api-access-x9mn7\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 
14:58:12.661961 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8efc871-44f0-4bbd-b639-6adaee23319a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661970 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661978 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfjqw\" (UniqueName: \"kubernetes.io/projected/112518bc-4caf-44c2-8920-185e2e90cc9b-kube-api-access-lfjqw\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661986 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.661994 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.662484 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-utilities" (OuterVolumeSpecName: "utilities") pod "895390cd-d0f8-46da-a932-6cccd295f203" (UID: "895390cd-d0f8-46da-a932-6cccd295f203"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.664125 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/895390cd-d0f8-46da-a932-6cccd295f203-kube-api-access-7bn7w" (OuterVolumeSpecName: "kube-api-access-7bn7w") pod "895390cd-d0f8-46da-a932-6cccd295f203" (UID: "895390cd-d0f8-46da-a932-6cccd295f203"). InnerVolumeSpecName "kube-api-access-7bn7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.708836 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "112518bc-4caf-44c2-8920-185e2e90cc9b" (UID: "112518bc-4caf-44c2-8920-185e2e90cc9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.726391 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "895390cd-d0f8-46da-a932-6cccd295f203" (UID: "895390cd-d0f8-46da-a932-6cccd295f203"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.762599 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bn7w\" (UniqueName: \"kubernetes.io/projected/895390cd-d0f8-46da-a932-6cccd295f203-kube-api-access-7bn7w\") on node \"crc\" DevicePath \"\""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.762634 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112518bc-4caf-44c2-8920-185e2e90cc9b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.762642 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.762689 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/895390cd-d0f8-46da-a932-6cccd295f203-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 14:58:12 crc kubenswrapper[4705]: I0216 14:58:12.764514 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ghmpd"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.446089 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxpr" event={"ID":"895390cd-d0f8-46da-a932-6cccd295f203","Type":"ContainerDied","Data":"3d2f0059d40b4313cb2192bb0c8318a3e59e5de2da0badc178590ca35c5bf347"}
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.446586 4705 scope.go:117] "RemoveContainer" containerID="d33c37236673d66e2901d64db78200c763977b99a1686820a64dbf3d5e56fb7b"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.446129 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvxpr"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.449836 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sj9bt"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.449966 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sj9bt" event={"ID":"c8efc871-44f0-4bbd-b639-6adaee23319a","Type":"ContainerDied","Data":"9c38ddd230468ed8cd1a56ea6b741c62c5cf9bb056f3dfa31abce6f0108cc3e2"}
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.456236 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkkgp" event={"ID":"112518bc-4caf-44c2-8920-185e2e90cc9b","Type":"ContainerDied","Data":"5ca975ac41d20405951f16e100085714e84618ea7435589dc42061daef0e3c0d"}
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.456297 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkkgp"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.457885 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" event={"ID":"88197577-5157-4d99-9813-eb3173530b4f","Type":"ContainerStarted","Data":"38989531c1d423921e4d11207bd66d821a0d3882fbdede6a5d7ccde2f9598b95"}
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.457910 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" event={"ID":"88197577-5157-4d99-9813-eb3173530b4f","Type":"ContainerStarted","Data":"c160938cc4a9aeb02b8fb0dcd8866dc1e6d1972641bc31e88fe4f8e47c6d676f"}
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.458476 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.461869 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmh5s"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.462411 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh5s" event={"ID":"2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788","Type":"ContainerDied","Data":"9e7c06275441e0dc9753d3e97f80b0b2fa0173ed74928bf3711fd998b37c0d36"}
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.462648 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.486855 4705 scope.go:117] "RemoveContainer" containerID="142e52fe965dccc8447bce8b51d66eb18e77b2fbf8857b7b9eaf42bda581cb4b"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.495871 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ghmpd" podStartSLOduration=2.495853057 podStartE2EDuration="2.495853057s" podCreationTimestamp="2026-02-16 14:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:58:13.495766984 +0000 UTC m=+287.680744090" watchObservedRunningTime="2026-02-16 14:58:13.495853057 +0000 UTC m=+287.680830133"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.527310 4705 scope.go:117] "RemoveContainer" containerID="47dd83c51982eee0fc8944965237e1d7e630e2a9915e5bf23151e62a40008638"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.528668 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh5s"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.538872 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh5s"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.543495 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sj9bt"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.549567 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sj9bt"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.553806 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvxpr"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.559706 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wvxpr"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.561427 4705 scope.go:117] "RemoveContainer" containerID="d21d87e204d7c7dd1f5e531f27be7d67418c7a9af9ade838a90a03b259c16e3c"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.564523 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qkkgp"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.566624 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qkkgp"]
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.576581 4705 scope.go:117] "RemoveContainer" containerID="73ba943d06af17d02c46446ace18358f2e018622fa9d08256b673061932ee618"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.591481 4705 scope.go:117] "RemoveContainer" containerID="4e44853d8ab25d2d5626a88e1f0b8ee2df4324e46ca5431c6ba290df4560e9f2"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.605832 4705 scope.go:117] "RemoveContainer" containerID="44b2753298e481a1af81ac801ec3b5340db0dc87e754c807e8d3e4dee8fa47ff"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.622927 4705 scope.go:117] "RemoveContainer" containerID="bc3f70071f15f7c623a394166db10d02b47e2458284d6c7b790a1b750e33d8c7"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.637537 4705 scope.go:117] "RemoveContainer" containerID="2d8d5694b911f4b43d4018735e7222f174757c80b72ed579b3b1544c211daf10"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.651513 4705 scope.go:117] "RemoveContainer" containerID="3cb8479b4305f364c5f6ead421d66ba76fae3e3cb48c375431bc5f1d1839a870"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.666399 4705 scope.go:117] "RemoveContainer" containerID="3bf941b0ceb33444ebc5dd947fedfa63976db0f6ca005483c4d7b0a244761dba"
Feb 16 14:58:13 crc kubenswrapper[4705]: I0216 14:58:13.682471 4705 scope.go:117] "RemoveContainer" containerID="d4b9a5df6e9f03bb94d5e2fb0f0b632bf65e0617fc3ef91575b6942f876f86c6"
Feb 16 14:58:14 crc kubenswrapper[4705]: I0216 14:58:14.425497 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" path="/var/lib/kubelet/pods/112518bc-4caf-44c2-8920-185e2e90cc9b/volumes"
Feb 16 14:58:14 crc kubenswrapper[4705]: I0216 14:58:14.426072 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" path="/var/lib/kubelet/pods/2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788/volumes"
Feb 16 14:58:14 crc kubenswrapper[4705]: I0216 14:58:14.426661 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" path="/var/lib/kubelet/pods/5621ad75-f2c2-44c8-aff8-ed4da48fc415/volumes"
Feb 16 14:58:14 crc kubenswrapper[4705]: I0216 14:58:14.427166 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="895390cd-d0f8-46da-a932-6cccd295f203" path="/var/lib/kubelet/pods/895390cd-d0f8-46da-a932-6cccd295f203/volumes"
Feb 16 14:58:14 crc kubenswrapper[4705]: I0216 14:58:14.427742 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" path="/var/lib/kubelet/pods/c8efc871-44f0-4bbd-b639-6adaee23319a/volumes"
Feb 16 14:58:26 crc kubenswrapper[4705]: I0216 14:58:26.242881 4705 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.031053 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"]
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032260 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032285 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032316 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032334 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032366 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032425 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032443 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032454 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032468 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032480 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032503 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032516 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032562 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032576 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="extract-content"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032595 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032608 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032626 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032637 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032655 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032667 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032683 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032695 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032712 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032725 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: E0216 14:58:43.032739 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032751 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="extract-utilities"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032923 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="895390cd-d0f8-46da-a932-6cccd295f203" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032941 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fdbb7bf-9a75-4908-adfc-0ea4ce5b5788" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032965 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="112518bc-4caf-44c2-8920-185e2e90cc9b" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.032984 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5621ad75-f2c2-44c8-aff8-ed4da48fc415" containerName="marketplace-operator"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.033002 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8efc871-44f0-4bbd-b639-6adaee23319a" containerName="registry-server"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.033812 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.036889 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.037985 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.038903 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.039674 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.040972 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.050674 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"]
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.198618 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/72ebc12e-e218-4611-bf0f-792c7a949828-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.198674 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpr2t\" (UniqueName: \"kubernetes.io/projected/72ebc12e-e218-4611-bf0f-792c7a949828-kube-api-access-xpr2t\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.198746 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/72ebc12e-e218-4611-bf0f-792c7a949828-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.299739 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/72ebc12e-e218-4611-bf0f-792c7a949828-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.299800 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpr2t\" (UniqueName: \"kubernetes.io/projected/72ebc12e-e218-4611-bf0f-792c7a949828-kube-api-access-xpr2t\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.299868 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/72ebc12e-e218-4611-bf0f-792c7a949828-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.300955 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/72ebc12e-e218-4611-bf0f-792c7a949828-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.317278 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/72ebc12e-e218-4611-bf0f-792c7a949828-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.323967 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpr2t\" (UniqueName: \"kubernetes.io/projected/72ebc12e-e218-4611-bf0f-792c7a949828-kube-api-access-xpr2t\") pod \"cluster-monitoring-operator-6d5b84845-vp6sl\" (UID: \"72ebc12e-e218-4611-bf0f-792c7a949828\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.362915 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"
Feb 16 14:58:43 crc kubenswrapper[4705]: I0216 14:58:43.888973 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl"]
Feb 16 14:58:44 crc kubenswrapper[4705]: I0216 14:58:44.660349 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl" event={"ID":"72ebc12e-e218-4611-bf0f-792c7a949828","Type":"ContainerStarted","Data":"3374959813bb2a88c4ad5f65a202394edea8c86c8f2d291e2b483c6e0ffba088"}
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.602030 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"]
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.603196 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.605918 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.613140 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"]
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.672989 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl" event={"ID":"72ebc12e-e218-4611-bf0f-792c7a949828","Type":"ContainerStarted","Data":"1ba5386d8196ec9b0269d25850eadefeb5d34f76dcf71c0f4115c0afc09dab84"}
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.755019 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d8e34ca0-dbbd-4076-b891-9d44df6973cc-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8tn4j\" (UID: \"d8e34ca0-dbbd-4076-b891-9d44df6973cc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.856699 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d8e34ca0-dbbd-4076-b891-9d44df6973cc-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8tn4j\" (UID: \"d8e34ca0-dbbd-4076-b891-9d44df6973cc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.871884 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d8e34ca0-dbbd-4076-b891-9d44df6973cc-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-8tn4j\" (UID: \"d8e34ca0-dbbd-4076-b891-9d44df6973cc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:46 crc kubenswrapper[4705]: I0216 14:58:46.919923 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:47 crc kubenswrapper[4705]: I0216 14:58:47.390405 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-vp6sl" podStartSLOduration=2.362070325 podStartE2EDuration="4.390350239s" podCreationTimestamp="2026-02-16 14:58:43 +0000 UTC" firstStartedPulling="2026-02-16 14:58:43.903105679 +0000 UTC m=+318.088082765" lastFinishedPulling="2026-02-16 14:58:45.931385583 +0000 UTC m=+320.116362679" observedRunningTime="2026-02-16 14:58:46.695630914 +0000 UTC m=+320.880607990" watchObservedRunningTime="2026-02-16 14:58:47.390350239 +0000 UTC m=+321.575327345"
Feb 16 14:58:47 crc kubenswrapper[4705]: I0216 14:58:47.395332 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"]
Feb 16 14:58:47 crc kubenswrapper[4705]: W0216 14:58:47.397132 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e34ca0_dbbd_4076_b891_9d44df6973cc.slice/crio-11c3d1f87e9e4e7fa5195ed963dd5d1271f5f155055ae7f604745c4ccb905b44 WatchSource:0}: Error finding container 11c3d1f87e9e4e7fa5195ed963dd5d1271f5f155055ae7f604745c4ccb905b44: Status 404 returned error can't find the container with id 11c3d1f87e9e4e7fa5195ed963dd5d1271f5f155055ae7f604745c4ccb905b44
Feb 16 14:58:47 crc kubenswrapper[4705]: I0216 14:58:47.684344 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j" event={"ID":"d8e34ca0-dbbd-4076-b891-9d44df6973cc","Type":"ContainerStarted","Data":"11c3d1f87e9e4e7fa5195ed963dd5d1271f5f155055ae7f604745c4ccb905b44"}
Feb 16 14:58:49 crc kubenswrapper[4705]: I0216 14:58:49.700844 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j" event={"ID":"d8e34ca0-dbbd-4076-b891-9d44df6973cc","Type":"ContainerStarted","Data":"e00e7afcb86d2918efeb1ff3ccac1e146178821457c1c91ea59828d6f5be9aea"}
Feb 16 14:58:49 crc kubenswrapper[4705]: I0216 14:58:49.701483 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:49 crc kubenswrapper[4705]: I0216 14:58:49.714942 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j"
Feb 16 14:58:49 crc kubenswrapper[4705]: I0216 14:58:49.727075 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-8tn4j" podStartSLOduration=2.379565092 podStartE2EDuration="3.727044115s" podCreationTimestamp="2026-02-16 14:58:46 +0000 UTC" firstStartedPulling="2026-02-16 14:58:47.401633268 +0000 UTC m=+321.586610374" lastFinishedPulling="2026-02-16 14:58:48.749112321 +0000 UTC m=+322.934089397" observedRunningTime="2026-02-16 14:58:49.719725038 +0000 UTC m=+323.904702144" watchObservedRunningTime="2026-02-16 14:58:49.727044115 +0000 UTC m=+323.912021231"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.668639 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-tnfwx"]
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.669640 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.672713 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.674393 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.674467 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.686383 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-tnfwx"]
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.813428 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-metrics-client-ca\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.813526 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.813628 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thz9j\" (UniqueName: \"kubernetes.io/projected/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-kube-api-access-thz9j\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.813684 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.915032 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-metrics-client-ca\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.915142 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.915240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thz9j\" (UniqueName: \"kubernetes.io/projected/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-kube-api-access-thz9j\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.915295 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.918012 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-metrics-client-ca\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.927006 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.927960 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.945022 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thz9j\" (UniqueName: \"kubernetes.io/projected/d2aa4e25-93a7-4e46-85b8-6302c48f8b5b-kube-api-access-thz9j\") pod \"prometheus-operator-db54df47d-tnfwx\" (UID: \"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:50 crc kubenswrapper[4705]: I0216 14:58:50.988208 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx"
Feb 16 14:58:51 crc kubenswrapper[4705]: I0216 14:58:51.502176 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-tnfwx"]
Feb 16 14:58:51 crc kubenswrapper[4705]: I0216 14:58:51.734359 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx" event={"ID":"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b","Type":"ContainerStarted","Data":"5894ef1340f11e00d73f5114c91786a42fe5b8be889108ce8e69480fd6d351f5"}
Feb 16 14:58:53 crc kubenswrapper[4705]: I0216 14:58:53.797850 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx" event={"ID":"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b","Type":"ContainerStarted","Data":"f089d0e7440aab03f2cc836492e4d9c838d8ed93045cdc48d6db5966e686b586"}
Feb 16 14:58:53 crc kubenswrapper[4705]: I0216 14:58:53.798341 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx" event={"ID":"d2aa4e25-93a7-4e46-85b8-6302c48f8b5b","Type":"ContainerStarted","Data":"af6035988030a51a23b058e69ceedbd75a510f60667ec5f36754552e903becb6"}
Feb 16 14:58:53 crc kubenswrapper[4705]: I0216 14:58:53.821595 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-tnfwx" podStartSLOduration=2.192105362 podStartE2EDuration="3.821567312s" podCreationTimestamp="2026-02-16 14:58:50 +0000 UTC" firstStartedPulling="2026-02-16 14:58:51.51665307 +0000 UTC m=+325.701630156" lastFinishedPulling="2026-02-16 14:58:53.14611503 +0000 UTC m=+327.331092106" observedRunningTime="2026-02-16 14:58:53.817709293 +0000 UTC m=+328.002686369" watchObservedRunningTime="2026-02-16 14:58:53.821567312 +0000 UTC m=+328.006544428"
Feb 16 14:58:55 crc kubenswrapper[4705]: I0216 14:58:55.955720 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z"]
Feb 16 14:58:55 crc kubenswrapper[4705]: I0216 14:58:55.957027 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z"
Feb 16 14:58:55 crc kubenswrapper[4705]: I0216 14:58:55.958958 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 16 14:58:55 crc kubenswrapper[4705]: I0216 14:58:55.959097 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 16 14:58:55 crc kubenswrapper[4705]: I0216 14:58:55.975149 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z"]
Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.035156 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a10863da-bf1a-4f07-8ffc-4d05deba027a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z"
Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.035283 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName:
\"kubernetes.io/secret/a10863da-bf1a-4f07-8ffc-4d05deba027a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.035349 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv8bh\" (UniqueName: \"kubernetes.io/projected/a10863da-bf1a-4f07-8ffc-4d05deba027a-kube-api-access-vv8bh\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.035420 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a10863da-bf1a-4f07-8ffc-4d05deba027a-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.049584 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj"] Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.050887 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.053262 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.053868 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.054650 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.061859 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj"] Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.086848 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-6vxhj"] Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.087896 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.090927 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.091165 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137034 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a10863da-bf1a-4f07-8ffc-4d05deba027a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137111 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137165 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a10863da-bf1a-4f07-8ffc-4d05deba027a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137200 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137235 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137264 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q5br\" (UniqueName: \"kubernetes.io/projected/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-kube-api-access-4q5br\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137300 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv8bh\" (UniqueName: \"kubernetes.io/projected/a10863da-bf1a-4f07-8ffc-4d05deba027a-kube-api-access-vv8bh\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137330 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-metrics-client-ca\") pod \"node-exporter-6vxhj\" (UID: 
\"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137385 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137419 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a10863da-bf1a-4f07-8ffc-4d05deba027a-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137444 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-tls\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137483 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-textfile\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137511 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137543 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-sys\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137575 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137615 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-wtmp\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-root\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.137676 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqnq8\" (UniqueName: \"kubernetes.io/projected/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-api-access-vqnq8\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.138820 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a10863da-bf1a-4f07-8ffc-4d05deba027a-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.144450 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a10863da-bf1a-4f07-8ffc-4d05deba027a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.146752 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a10863da-bf1a-4f07-8ffc-4d05deba027a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.153087 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv8bh\" (UniqueName: \"kubernetes.io/projected/a10863da-bf1a-4f07-8ffc-4d05deba027a-kube-api-access-vv8bh\") pod 
\"openshift-state-metrics-566fddb674-7xc7z\" (UID: \"a10863da-bf1a-4f07-8ffc-4d05deba027a\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239078 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239159 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239273 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q5br\" (UniqueName: \"kubernetes.io/projected/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-kube-api-access-4q5br\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239306 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-metrics-client-ca\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239340 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239382 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-tls\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239415 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-textfile\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239438 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc 
kubenswrapper[4705]: I0216 14:58:56.239464 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-sys\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239497 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239530 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-wtmp\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239566 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-root\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239592 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqnq8\" (UniqueName: \"kubernetes.io/projected/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-api-access-vqnq8\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 
14:58:56.239756 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-sys\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.239903 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-root\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.240149 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-wtmp\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.240607 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-textfile\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.240678 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.240723 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.240771 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.241104 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-metrics-client-ca\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.247833 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-tls\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.247982 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc 
kubenswrapper[4705]: I0216 14:58:56.248014 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.259521 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqnq8\" (UniqueName: \"kubernetes.io/projected/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-api-access-vqnq8\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.260719 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b0767c1-7dc6-4c17-baa7-34f91d1f7207-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-qdnlj\" (UID: \"8b0767c1-7dc6-4c17-baa7-34f91d1f7207\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.262236 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q5br\" (UniqueName: \"kubernetes.io/projected/5b3841cd-a0f0-481c-9a3e-4bee8df62db2-kube-api-access-4q5br\") pod \"node-exporter-6vxhj\" (UID: \"5b3841cd-a0f0-481c-9a3e-4bee8df62db2\") " pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.276358 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.365917 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" Feb 16 14:58:56 crc kubenswrapper[4705]: I0216 14:58:56.402751 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-6vxhj" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:56.710329 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z"] Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:56.817993 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" event={"ID":"a10863da-bf1a-4f07-8ffc-4d05deba027a","Type":"ContainerStarted","Data":"8c601e41340687fe672754432c425e94df05fefb0f5e324452aef3aee109cc19"} Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:56.820897 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-6vxhj" event={"ID":"5b3841cd-a0f0-481c-9a3e-4bee8df62db2","Type":"ContainerStarted","Data":"f208ca3e03a0a79df1c645de98fd047e83e4aada3617908aca1b033402879990"} Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.155459 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.157601 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.160748 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.160809 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.163910 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.164818 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.165880 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.166047 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.166577 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.170267 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.181497 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257222 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h98d9\" (UniqueName: 
\"kubernetes.io/projected/8934da22-3ea4-4b0b-be02-6062165cdc7b-kube-api-access-h98d9\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257300 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-config-volume\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257352 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257391 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-web-config\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257427 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257461 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8934da22-3ea4-4b0b-be02-6062165cdc7b-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257488 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8934da22-3ea4-4b0b-be02-6062165cdc7b-config-out\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257513 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257531 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8934da22-3ea4-4b0b-be02-6062165cdc7b-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257565 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8934da22-3ea4-4b0b-be02-6062165cdc7b-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" 
Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257585 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8934da22-3ea4-4b0b-be02-6062165cdc7b-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.257603 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.341919 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj"] Feb 16 14:58:57 crc kubenswrapper[4705]: W0216 14:58:57.347317 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b0767c1_7dc6_4c17_baa7_34f91d1f7207.slice/crio-54142d8b9b01a6c37544c98add454a4c36d67874cf7cc956831fea06dde1693d WatchSource:0}: Error finding container 54142d8b9b01a6c37544c98add454a4c36d67874cf7cc956831fea06dde1693d: Status 404 returned error can't find the container with id 54142d8b9b01a6c37544c98add454a4c36d67874cf7cc956831fea06dde1693d Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358164 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8934da22-3ea4-4b0b-be02-6062165cdc7b-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358224 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358254 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8934da22-3ea4-4b0b-be02-6062165cdc7b-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358292 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8934da22-3ea4-4b0b-be02-6062165cdc7b-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358309 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358329 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h98d9\" (UniqueName: \"kubernetes.io/projected/8934da22-3ea4-4b0b-be02-6062165cdc7b-kube-api-access-h98d9\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: 
I0216 14:58:57.358364 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-config-volume\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358397 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358414 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-web-config\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358442 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.358472 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8934da22-3ea4-4b0b-be02-6062165cdc7b-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 
14:58:57.358495 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8934da22-3ea4-4b0b-be02-6062165cdc7b-config-out\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.359915 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8934da22-3ea4-4b0b-be02-6062165cdc7b-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.360911 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8934da22-3ea4-4b0b-be02-6062165cdc7b-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.365291 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8934da22-3ea4-4b0b-be02-6062165cdc7b-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.365613 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8934da22-3ea4-4b0b-be02-6062165cdc7b-config-out\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.365701 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-config-volume\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.365958 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.366218 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-web-config\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.366670 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.367768 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.368053 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8934da22-3ea4-4b0b-be02-6062165cdc7b-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.371910 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8934da22-3ea4-4b0b-be02-6062165cdc7b-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.376771 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h98d9\" (UniqueName: \"kubernetes.io/projected/8934da22-3ea4-4b0b-be02-6062165cdc7b-kube-api-access-h98d9\") pod \"alertmanager-main-0\" (UID: \"8934da22-3ea4-4b0b-be02-6062165cdc7b\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.528735 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.831446 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" event={"ID":"8b0767c1-7dc6-4c17-baa7-34f91d1f7207","Type":"ContainerStarted","Data":"54142d8b9b01a6c37544c98add454a4c36d67874cf7cc956831fea06dde1693d"} Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.833871 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" event={"ID":"a10863da-bf1a-4f07-8ffc-4d05deba027a","Type":"ContainerStarted","Data":"1be43cfe215d5c854f73e730b03a0f9be0055518089e1d063daefc3891e495f0"} Feb 16 14:58:57 crc kubenswrapper[4705]: I0216 14:58:57.834345 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" event={"ID":"a10863da-bf1a-4f07-8ffc-4d05deba027a","Type":"ContainerStarted","Data":"c9d6e81e00e6a49b1afaa3fbaa3c4ce51992a2410a8e7e2ae3afcce8a1821a7c"} Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.003744 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 14:58:58 crc kubenswrapper[4705]: W0216 14:58:58.012765 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8934da22_3ea4_4b0b_be02_6062165cdc7b.slice/crio-7f9e032af2549bb7da3ccbc713365395b5788599b8e4b643f904f7adfbce258d WatchSource:0}: Error finding container 7f9e032af2549bb7da3ccbc713365395b5788599b8e4b643f904f7adfbce258d: Status 404 returned error can't find the container with id 7f9e032af2549bb7da3ccbc713365395b5788599b8e4b643f904f7adfbce258d Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.038440 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6"] Feb 16 14:58:58 crc 
kubenswrapper[4705]: I0216 14:58:58.048439 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.052182 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.052988 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-bf60ue0kt7k38" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.053200 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.053349 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.053456 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6"] Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.054122 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.054303 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.072200 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-grpc-tls\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.072285 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.072332 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.072409 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.072461 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrq42\" (UniqueName: \"kubernetes.io/projected/515dd6a4-4119-4c19-8d36-fcaf9df137ba-kube-api-access-zrq42\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.073505 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.073552 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-tls\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.073570 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/515dd6a4-4119-4c19-8d36-fcaf9df137ba-metrics-client-ca\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183539 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-grpc-tls\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183599 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " 
pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183630 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183657 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183682 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrq42\" (UniqueName: \"kubernetes.io/projected/515dd6a4-4119-4c19-8d36-fcaf9df137ba-kube-api-access-zrq42\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183718 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183772 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-tls\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.183795 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/515dd6a4-4119-4c19-8d36-fcaf9df137ba-metrics-client-ca\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.191976 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/515dd6a4-4119-4c19-8d36-fcaf9df137ba-metrics-client-ca\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.205720 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.211001 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-tls\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc 
kubenswrapper[4705]: I0216 14:58:58.211662 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.214230 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.214520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.216104 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/515dd6a4-4119-4c19-8d36-fcaf9df137ba-secret-grpc-tls\") pod \"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.218827 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrq42\" (UniqueName: \"kubernetes.io/projected/515dd6a4-4119-4c19-8d36-fcaf9df137ba-kube-api-access-zrq42\") pod 
\"thanos-querier-5d57ff9f57-lk2s6\" (UID: \"515dd6a4-4119-4c19-8d36-fcaf9df137ba\") " pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.374513 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.841440 4705 generic.go:334] "Generic (PLEG): container finished" podID="5b3841cd-a0f0-481c-9a3e-4bee8df62db2" containerID="318162a536ab2026b02263dfe1cda4a0c5e93bbfdbcd47bdcd459fc7d0b8d4f9" exitCode=0 Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.841484 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-6vxhj" event={"ID":"5b3841cd-a0f0-481c-9a3e-4bee8df62db2","Type":"ContainerDied","Data":"318162a536ab2026b02263dfe1cda4a0c5e93bbfdbcd47bdcd459fc7d0b8d4f9"} Feb 16 14:58:58 crc kubenswrapper[4705]: I0216 14:58:58.842864 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"7f9e032af2549bb7da3ccbc713365395b5788599b8e4b643f904f7adfbce258d"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.575942 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6"] Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.852160 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" event={"ID":"8b0767c1-7dc6-4c17-baa7-34f91d1f7207","Type":"ContainerStarted","Data":"426a9d8d5c29662c9db0b6b3816f93e8090cb0f6179947489103d38c8f0a334d"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.852213 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" 
event={"ID":"8b0767c1-7dc6-4c17-baa7-34f91d1f7207","Type":"ContainerStarted","Data":"d7749e96570da9e34f747fa92618268b8f4d665f615634cb6452db7542c7d07a"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.852224 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" event={"ID":"8b0767c1-7dc6-4c17-baa7-34f91d1f7207","Type":"ContainerStarted","Data":"0e48fbee67bf0a20b366db6b56a564eded959a7c79f5d6541b23114c9a55d2b8"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.856833 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-6vxhj" event={"ID":"5b3841cd-a0f0-481c-9a3e-4bee8df62db2","Type":"ContainerStarted","Data":"b233f064d4b94b1967e7b91880bed2c0d0dbd1fe86ddac4b3bc6e8be441b80a1"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.856904 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-6vxhj" event={"ID":"5b3841cd-a0f0-481c-9a3e-4bee8df62db2","Type":"ContainerStarted","Data":"4542cec339befec468e9da9511dc4622bc961f47f402e263d321d6d438fca981"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.859130 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"1dd5baf26a8eaf740b8c98463ec34682b8be890fdc1f1129358afe0010c97a8a"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.864936 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" event={"ID":"a10863da-bf1a-4f07-8ffc-4d05deba027a","Type":"ContainerStarted","Data":"f38a5533d5f5de9e5f56eb17dbfdb6a069ca8b6b1f943abcef250cac3c2a5c53"} Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.882464 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qdnlj" 
podStartSLOduration=2.104681522 podStartE2EDuration="3.882439965s" podCreationTimestamp="2026-02-16 14:58:56 +0000 UTC" firstStartedPulling="2026-02-16 14:58:57.350219422 +0000 UTC m=+331.535196498" lastFinishedPulling="2026-02-16 14:58:59.127977865 +0000 UTC m=+333.312954941" observedRunningTime="2026-02-16 14:58:59.871557929 +0000 UTC m=+334.056535005" watchObservedRunningTime="2026-02-16 14:58:59.882439965 +0000 UTC m=+334.067417041" Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.899292 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-7xc7z" podStartSLOduration=2.83198908 podStartE2EDuration="4.89927352s" podCreationTimestamp="2026-02-16 14:58:55 +0000 UTC" firstStartedPulling="2026-02-16 14:58:57.040488305 +0000 UTC m=+331.225465381" lastFinishedPulling="2026-02-16 14:58:59.107772735 +0000 UTC m=+333.292749821" observedRunningTime="2026-02-16 14:58:59.893490887 +0000 UTC m=+334.078467973" watchObservedRunningTime="2026-02-16 14:58:59.89927352 +0000 UTC m=+334.084250596" Feb 16 14:58:59 crc kubenswrapper[4705]: I0216 14:58:59.914718 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-6vxhj" podStartSLOduration=2.5504684749999997 podStartE2EDuration="3.914695675s" podCreationTimestamp="2026-02-16 14:58:56 +0000 UTC" firstStartedPulling="2026-02-16 14:58:56.426474726 +0000 UTC m=+330.611451802" lastFinishedPulling="2026-02-16 14:58:57.790701926 +0000 UTC m=+331.975679002" observedRunningTime="2026-02-16 14:58:59.912254426 +0000 UTC m=+334.097231542" watchObservedRunningTime="2026-02-16 14:58:59.914695675 +0000 UTC m=+334.099672751" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.236155 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-85b67b995c-f7f68"] Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.237583 4705 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.241258 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.241431 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.241494 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.241569 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ecjvii5sj4rci" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.248657 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.255585 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-85b67b995c-f7f68"] Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335183 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cm9j\" (UniqueName: \"kubernetes.io/projected/830c9eb2-2fd1-4213-9067-d1df432bc535-kube-api-access-8cm9j\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335297 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/830c9eb2-2fd1-4213-9067-d1df432bc535-metrics-server-audit-profiles\") pod \"metrics-server-85b67b995c-f7f68\" (UID: 
\"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335407 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-secret-metrics-client-certs\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335438 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/830c9eb2-2fd1-4213-9067-d1df432bc535-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335500 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/830c9eb2-2fd1-4213-9067-d1df432bc535-audit-log\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335613 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-client-ca-bundle\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.335674 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-secret-metrics-server-tls\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436449 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-secret-metrics-client-certs\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436514 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/830c9eb2-2fd1-4213-9067-d1df432bc535-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436540 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/830c9eb2-2fd1-4213-9067-d1df432bc535-audit-log\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436599 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-client-ca-bundle\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " 
pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436634 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-secret-metrics-server-tls\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436661 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/830c9eb2-2fd1-4213-9067-d1df432bc535-metrics-server-audit-profiles\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.436679 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cm9j\" (UniqueName: \"kubernetes.io/projected/830c9eb2-2fd1-4213-9067-d1df432bc535-kube-api-access-8cm9j\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.438134 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/830c9eb2-2fd1-4213-9067-d1df432bc535-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.438818 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: 
\"kubernetes.io/empty-dir/830c9eb2-2fd1-4213-9067-d1df432bc535-audit-log\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.439329 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/830c9eb2-2fd1-4213-9067-d1df432bc535-metrics-server-audit-profiles\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.445698 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-secret-metrics-client-certs\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.453310 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-secret-metrics-server-tls\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.455829 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cm9j\" (UniqueName: \"kubernetes.io/projected/830c9eb2-2fd1-4213-9067-d1df432bc535-kube-api-access-8cm9j\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.457296 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830c9eb2-2fd1-4213-9067-d1df432bc535-client-ca-bundle\") pod \"metrics-server-85b67b995c-f7f68\" (UID: \"830c9eb2-2fd1-4213-9067-d1df432bc535\") " pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.600000 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.684325 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.684398 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.878598 4705 generic.go:334] "Generic (PLEG): container finished" podID="8934da22-3ea4-4b0b-be02-6062165cdc7b" containerID="78e9a508715e358142a62884bc384aae0d71c81187121bcab83abe45207704c4" exitCode=0 Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.878672 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerDied","Data":"78e9a508715e358142a62884bc384aae0d71c81187121bcab83abe45207704c4"} Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.992168 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6"] Feb 
16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.992919 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.995614 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 16 14:59:01 crc kubenswrapper[4705]: I0216 14:59:01.996912 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.009833 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6"] Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.024769 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-85b67b995c-f7f68"] Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.049592 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9b846d4f-0232-4904-8b2c-26faa7b2a55d-monitoring-plugin-cert\") pod \"monitoring-plugin-59b55b8b7f-pbcb6\" (UID: \"9b846d4f-0232-4904-8b2c-26faa7b2a55d\") " pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.151029 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9b846d4f-0232-4904-8b2c-26faa7b2a55d-monitoring-plugin-cert\") pod \"monitoring-plugin-59b55b8b7f-pbcb6\" (UID: \"9b846d4f-0232-4904-8b2c-26faa7b2a55d\") " pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.157357 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/9b846d4f-0232-4904-8b2c-26faa7b2a55d-monitoring-plugin-cert\") pod \"monitoring-plugin-59b55b8b7f-pbcb6\" (UID: \"9b846d4f-0232-4904-8b2c-26faa7b2a55d\") " pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.319254 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.434719 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.457805 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.462402 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.462689 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.463106 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.463419 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.463921 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.464412 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.465842 4705 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.467305 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.467671 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-a4a5ql6fgckom" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.468030 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.477145 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.478147 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.480583 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560756 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560809 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" 
Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560830 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560906 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560926 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-config\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560976 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.560996 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-config-out\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") 
" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561050 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561068 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561090 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561111 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561221 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561255 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xpdc\" (UniqueName: \"kubernetes.io/projected/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-kube-api-access-4xpdc\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561276 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-web-config\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561291 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561340 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561361 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.561419 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.663489 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.663543 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xpdc\" (UniqueName: \"kubernetes.io/projected/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-kube-api-access-4xpdc\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664065 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-web-config\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664095 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664120 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664147 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664169 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664228 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664252 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664272 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664289 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664307 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-config\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665063 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.664333 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665420 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-config-out\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665445 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665463 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665479 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665504 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665679 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.665819 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.666430 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.667460 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.670430 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" 
(UniqueName: \"kubernetes.io/configmap/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.671519 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-web-config\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.671735 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.672472 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-config-out\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.673190 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.673731 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-tls-assets\") pod \"prometheus-k8s-0\" (UID: 
\"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.675971 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-config\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.676744 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.678183 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.680059 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.681394 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xpdc\" (UniqueName: \"kubernetes.io/projected/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-kube-api-access-4xpdc\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.682155 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.693313 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/8232e0b2-8d33-4cf9-a367-5c1dc59b8629-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"8232e0b2-8d33-4cf9-a367-5c1dc59b8629\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: W0216 14:59:02.743996 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod830c9eb2_2fd1_4213_9067_d1df432bc535.slice/crio-c5926489f24eef3042f9622cd23504af2700ef73493f3629faeb4cfab4c30359 WatchSource:0}: Error finding container c5926489f24eef3042f9622cd23504af2700ef73493f3629faeb4cfab4c30359: Status 404 returned error can't find the container with id c5926489f24eef3042f9622cd23504af2700ef73493f3629faeb4cfab4c30359 Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.788945 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:02 crc kubenswrapper[4705]: I0216 14:59:02.889917 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" event={"ID":"830c9eb2-2fd1-4213-9067-d1df432bc535","Type":"ContainerStarted","Data":"c5926489f24eef3042f9622cd23504af2700ef73493f3629faeb4cfab4c30359"} Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.005013 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6"] Feb 16 14:59:03 crc kubenswrapper[4705]: W0216 14:59:03.036561 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b846d4f_0232_4904_8b2c_26faa7b2a55d.slice/crio-8314839583ed825f55327a72f1fc0bf745bda44c546ae745cc53444b4fc562f8 WatchSource:0}: Error finding container 8314839583ed825f55327a72f1fc0bf745bda44c546ae745cc53444b4fc562f8: Status 404 returned error can't find the container with id 8314839583ed825f55327a72f1fc0bf745bda44c546ae745cc53444b4fc562f8 Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.109279 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 14:59:03 crc kubenswrapper[4705]: W0216 14:59:03.129412 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8232e0b2_8d33_4cf9_a367_5c1dc59b8629.slice/crio-e4270142c4921b58579506637b3a02eb8337566fb3e80c947cb50832b60a2c40 WatchSource:0}: Error finding container e4270142c4921b58579506637b3a02eb8337566fb3e80c947cb50832b60a2c40: Status 404 returned error can't find the container with id e4270142c4921b58579506637b3a02eb8337566fb3e80c947cb50832b60a2c40 Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.909726 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" 
event={"ID":"9b846d4f-0232-4904-8b2c-26faa7b2a55d","Type":"ContainerStarted","Data":"8314839583ed825f55327a72f1fc0bf745bda44c546ae745cc53444b4fc562f8"} Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.913979 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"3349fdd1e3e817add8ae172e707bcfa41a80ee06f20ceb9ecdb59e4a4034499d"} Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.914065 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"5a6ac6ca2fc95c150224da3d41536f56bfac50db1519a4acfff69219d12973be"} Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.914082 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"77338a05dd068e237d5a9fd67b6fcee42963e4d99d4aa115078ed787618ed911"} Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.916477 4705 generic.go:334] "Generic (PLEG): container finished" podID="8232e0b2-8d33-4cf9-a367-5c1dc59b8629" containerID="518c82b812147b0274e50839da693ef12ca4cec3f8311e89b254aeb0fdcfffba" exitCode=0 Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.916516 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerDied","Data":"518c82b812147b0274e50839da693ef12ca4cec3f8311e89b254aeb0fdcfffba"} Feb 16 14:59:03 crc kubenswrapper[4705]: I0216 14:59:03.916538 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"e4270142c4921b58579506637b3a02eb8337566fb3e80c947cb50832b60a2c40"} Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.941860 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"34a7f3b720b05a75927d551d31375e6e4fd2b40396c81a83bf130ddd06e8bd47"} Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.941917 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"840c248a162da06d72f59e9121519b947f80132aac5d03cc2a1c61ea06b2102c"} Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.941937 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" event={"ID":"515dd6a4-4119-4c19-8d36-fcaf9df137ba","Type":"ContainerStarted","Data":"5f41a84ccee1298292baea7ad2bd89aa48e6fc8584cdd46fdbe4262c72b2b6cb"} Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.942614 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.947026 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" event={"ID":"830c9eb2-2fd1-4213-9067-d1df432bc535","Type":"ContainerStarted","Data":"45a42fc52e6365ed11e173699d7cc7a3eafe001f590daccbed6f5b8675aa8f8a"} Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.951206 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"cf05a2079c65f7fa894d38c17011719276cd795fd9b25d6597276dcbf64ca0b7"} Feb 16 14:59:06 crc 
kubenswrapper[4705]: I0216 14:59:06.951245 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"d766af1fd5f17efd05a2e026f0cb1ecebf437dd8b88a2f9dd68af1ec4838776e"} Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.951257 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"1cef49e463d58084bc213af83a2d88022b5f720f7311b0603015dc58897b07d7"} Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.951267 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"0c4fb87720547cfbd329c2ba327ab6de0321d09fc1ecd4506f7d20bb9ed37300"} Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.955414 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" event={"ID":"9b846d4f-0232-4904-8b2c-26faa7b2a55d","Type":"ContainerStarted","Data":"1a9bc57ca1ab3bca140012ad8ee7f70e50a4e10c8cee3c21b87b6898cf96b159"} Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.955590 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.960750 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.983926 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" podStartSLOduration=2.337809579 podStartE2EDuration="8.983906989s" podCreationTimestamp="2026-02-16 14:58:58 +0000 UTC" 
firstStartedPulling="2026-02-16 14:58:59.589100531 +0000 UTC m=+333.774077607" lastFinishedPulling="2026-02-16 14:59:06.235197941 +0000 UTC m=+340.420175017" observedRunningTime="2026-02-16 14:59:06.967439815 +0000 UTC m=+341.152416891" watchObservedRunningTime="2026-02-16 14:59:06.983906989 +0000 UTC m=+341.168884055" Feb 16 14:59:06 crc kubenswrapper[4705]: I0216 14:59:06.984913 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-59b55b8b7f-pbcb6" podStartSLOduration=2.851742894 podStartE2EDuration="5.984908078s" podCreationTimestamp="2026-02-16 14:59:01 +0000 UTC" firstStartedPulling="2026-02-16 14:59:03.039414311 +0000 UTC m=+337.224391387" lastFinishedPulling="2026-02-16 14:59:06.172579495 +0000 UTC m=+340.357556571" observedRunningTime="2026-02-16 14:59:06.97965592 +0000 UTC m=+341.164632996" watchObservedRunningTime="2026-02-16 14:59:06.984908078 +0000 UTC m=+341.169885154" Feb 16 14:59:07 crc kubenswrapper[4705]: I0216 14:59:07.009055 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" podStartSLOduration=2.586066921 podStartE2EDuration="6.009011528s" podCreationTimestamp="2026-02-16 14:59:01 +0000 UTC" firstStartedPulling="2026-02-16 14:59:02.758046325 +0000 UTC m=+336.943023401" lastFinishedPulling="2026-02-16 14:59:06.180990932 +0000 UTC m=+340.365968008" observedRunningTime="2026-02-16 14:59:07.002824903 +0000 UTC m=+341.187801989" watchObservedRunningTime="2026-02-16 14:59:07.009011528 +0000 UTC m=+341.193988604" Feb 16 14:59:07 crc kubenswrapper[4705]: I0216 14:59:07.970167 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"7bab3224b85bd914e698521ac62aa5681e1d44413c4a618247c33cc5e42abeb3"} Feb 16 14:59:07 crc kubenswrapper[4705]: I0216 14:59:07.970579 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8934da22-3ea4-4b0b-be02-6062165cdc7b","Type":"ContainerStarted","Data":"fbf594a2c490f645034f6d4e48201791a5706220d5b67a82a6bed6f43a1e240d"} Feb 16 14:59:08 crc kubenswrapper[4705]: I0216 14:59:08.006386 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.850021672 podStartE2EDuration="11.006344859s" podCreationTimestamp="2026-02-16 14:58:57 +0000 UTC" firstStartedPulling="2026-02-16 14:58:58.015515517 +0000 UTC m=+332.200492593" lastFinishedPulling="2026-02-16 14:59:06.171838694 +0000 UTC m=+340.356815780" observedRunningTime="2026-02-16 14:59:08.004775934 +0000 UTC m=+342.189753020" watchObservedRunningTime="2026-02-16 14:59:08.006344859 +0000 UTC m=+342.191321945" Feb 16 14:59:08 crc kubenswrapper[4705]: I0216 14:59:08.389631 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5d57ff9f57-lk2s6" Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.992148 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8232e0b2-8d33-4cf9-a367-5c1dc59b8629/prometheus/0.log" Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993649 4705 generic.go:334] "Generic (PLEG): container finished" podID="8232e0b2-8d33-4cf9-a367-5c1dc59b8629" containerID="bdaca3e765f591ccc4e1aa4b5f468fb316ba7f9b599cdce288cab982292bdb1a" exitCode=1 Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993695 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"718162dc642ffeb642787a06a6a625c6ba36e4bfeb82dcf2c75c23e9b2e4a519"} Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993719 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"bc3542bab90925422c40bd8a540b599627c5a460fa0f8327aba31ea0526307bc"} Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993730 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"7b29500bfd6886644b54142d0e54382aaf8a13889668df2ef6410dcae626c085"} Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993740 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"27f2bd7c4f49fe67b1d744f33d6dcfa8f5aedaa49d8ba1f32763a3496c9078af"} Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993749 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"b2b436d599380e4cd78818bbe08627018e351f142c0c8694d53d184e917f912b"} Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.993759 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerDied","Data":"bdaca3e765f591ccc4e1aa4b5f468fb316ba7f9b599cdce288cab982292bdb1a"} Feb 16 14:59:09 crc kubenswrapper[4705]: I0216 14:59:09.994258 4705 scope.go:117] "RemoveContainer" containerID="bdaca3e765f591ccc4e1aa4b5f468fb316ba7f9b599cdce288cab982292bdb1a" Feb 16 14:59:11 crc kubenswrapper[4705]: I0216 14:59:11.002724 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8232e0b2-8d33-4cf9-a367-5c1dc59b8629/prometheus/1.log" Feb 16 14:59:11 crc kubenswrapper[4705]: I0216 14:59:11.005887 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8232e0b2-8d33-4cf9-a367-5c1dc59b8629/prometheus/0.log" Feb 16 14:59:11 crc kubenswrapper[4705]: I0216 14:59:11.006320 4705 generic.go:334] "Generic (PLEG): container finished" podID="8232e0b2-8d33-4cf9-a367-5c1dc59b8629" containerID="3e7b75ae66e4f17162e543f28e0e441296d69a7c48430a1b1974d93ac129df39" exitCode=1 Feb 16 14:59:11 crc kubenswrapper[4705]: I0216 14:59:11.006407 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerDied","Data":"3e7b75ae66e4f17162e543f28e0e441296d69a7c48430a1b1974d93ac129df39"} Feb 16 14:59:11 crc kubenswrapper[4705]: I0216 14:59:11.006449 4705 scope.go:117] "RemoveContainer" containerID="bdaca3e765f591ccc4e1aa4b5f468fb316ba7f9b599cdce288cab982292bdb1a" Feb 16 14:59:11 crc kubenswrapper[4705]: I0216 14:59:11.007594 4705 scope.go:117] "RemoveContainer" containerID="3e7b75ae66e4f17162e543f28e0e441296d69a7c48430a1b1974d93ac129df39" Feb 16 14:59:11 crc kubenswrapper[4705]: E0216 14:59:11.008528 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=prometheus pod=prometheus-k8s-0_openshift-monitoring(8232e0b2-8d33-4cf9-a367-5c1dc59b8629)\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="8232e0b2-8d33-4cf9-a367-5c1dc59b8629" Feb 16 14:59:12 crc kubenswrapper[4705]: I0216 14:59:12.014719 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8232e0b2-8d33-4cf9-a367-5c1dc59b8629/prometheus/1.log" Feb 16 14:59:12 crc kubenswrapper[4705]: I0216 14:59:12.018385 4705 scope.go:117] "RemoveContainer" containerID="3e7b75ae66e4f17162e543f28e0e441296d69a7c48430a1b1974d93ac129df39" Feb 16 14:59:12 crc kubenswrapper[4705]: E0216 14:59:12.018897 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"prometheus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=prometheus pod=prometheus-k8s-0_openshift-monitoring(8232e0b2-8d33-4cf9-a367-5c1dc59b8629)\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="8232e0b2-8d33-4cf9-a367-5c1dc59b8629" Feb 16 14:59:12 crc kubenswrapper[4705]: I0216 14:59:12.790715 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:12 crc kubenswrapper[4705]: I0216 14:59:12.791157 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:13 crc kubenswrapper[4705]: I0216 14:59:13.023462 4705 scope.go:117] "RemoveContainer" containerID="3e7b75ae66e4f17162e543f28e0e441296d69a7c48430a1b1974d93ac129df39" Feb 16 14:59:13 crc kubenswrapper[4705]: E0216 14:59:13.023907 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=prometheus pod=prometheus-k8s-0_openshift-monitoring(8232e0b2-8d33-4cf9-a367-5c1dc59b8629)\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="8232e0b2-8d33-4cf9-a367-5c1dc59b8629" Feb 16 14:59:21 crc kubenswrapper[4705]: I0216 14:59:21.600425 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:21 crc kubenswrapper[4705]: I0216 14:59:21.601279 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:23 crc kubenswrapper[4705]: I0216 14:59:23.420340 4705 scope.go:117] "RemoveContainer" containerID="3e7b75ae66e4f17162e543f28e0e441296d69a7c48430a1b1974d93ac129df39" Feb 16 14:59:24 crc kubenswrapper[4705]: I0216 14:59:24.135080 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8232e0b2-8d33-4cf9-a367-5c1dc59b8629/prometheus/1.log" Feb 16 14:59:24 crc kubenswrapper[4705]: I0216 14:59:24.137877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8232e0b2-8d33-4cf9-a367-5c1dc59b8629","Type":"ContainerStarted","Data":"795a66c27c459d21aa086d47134bfc76ed07733769f38c683d09760d10e91e2e"} Feb 16 14:59:27 crc kubenswrapper[4705]: I0216 14:59:27.789877 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.239148 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=23.232881048 podStartE2EDuration="28.239126484s" podCreationTimestamp="2026-02-16 14:59:02 +0000 UTC" firstStartedPulling="2026-02-16 14:59:03.920653298 +0000 UTC m=+338.105630374" lastFinishedPulling="2026-02-16 14:59:08.926898734 +0000 UTC m=+343.111875810" observedRunningTime="2026-02-16 14:59:24.190077213 +0000 UTC m=+358.375054359" watchObservedRunningTime="2026-02-16 14:59:30.239126484 +0000 UTC m=+364.424103570" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.246731 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57c5b94cd8-vqsl6"] Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.247686 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.262157 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57c5b94cd8-vqsl6"] Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.291897 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-trusted-ca-bundle\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.291967 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-serving-cert\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.291998 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-service-ca\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.292070 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-oauth-config\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.292146 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-config\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.292177 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-oauth-serving-cert\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.292202 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjj4g\" (UniqueName: \"kubernetes.io/projected/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-kube-api-access-hjj4g\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.393898 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-config\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.393961 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-oauth-serving-cert\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 
14:59:30.393984 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjj4g\" (UniqueName: \"kubernetes.io/projected/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-kube-api-access-hjj4g\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.394024 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-trusted-ca-bundle\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.394058 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-serving-cert\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.394084 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-service-ca\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.394108 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-oauth-config\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.395443 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-service-ca\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.395492 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-config\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.396170 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-trusted-ca-bundle\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.397001 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-oauth-serving-cert\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.401959 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-serving-cert\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.402903 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-oauth-config\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.412338 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjj4g\" (UniqueName: \"kubernetes.io/projected/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-kube-api-access-hjj4g\") pod \"console-57c5b94cd8-vqsl6\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.575748 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.823049 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57c5b94cd8-vqsl6"] Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.849694 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j2v29"] Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.852576 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.854837 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.858353 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2v29"] Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.911145 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-catalog-content\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.911217 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8jvm\" (UniqueName: \"kubernetes.io/projected/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-kube-api-access-t8jvm\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:30 crc kubenswrapper[4705]: I0216 14:59:30.911347 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-utilities\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.013291 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-catalog-content\") pod \"community-operators-j2v29\" (UID: 
\"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.013401 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8jvm\" (UniqueName: \"kubernetes.io/projected/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-kube-api-access-t8jvm\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.013450 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-utilities\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.013933 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-catalog-content\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.013939 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-utilities\") pod \"community-operators-j2v29\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.037285 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8jvm\" (UniqueName: \"kubernetes.io/projected/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-kube-api-access-t8jvm\") pod \"community-operators-j2v29\" (UID: 
\"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.173404 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.190959 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57c5b94cd8-vqsl6" event={"ID":"32a46224-2f51-4cc5-9541-d1e5ac0d98eb","Type":"ContainerStarted","Data":"bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836"} Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.191035 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57c5b94cd8-vqsl6" event={"ID":"32a46224-2f51-4cc5-9541-d1e5ac0d98eb","Type":"ContainerStarted","Data":"638ba6eacff71725b50db8f008ac8fcbf0b93dd5e605bf9a759eecda45bb8f53"} Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.211078 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57c5b94cd8-vqsl6" podStartSLOduration=1.211054807 podStartE2EDuration="1.211054807s" podCreationTimestamp="2026-02-16 14:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:59:31.206295063 +0000 UTC m=+365.391272139" watchObservedRunningTime="2026-02-16 14:59:31.211054807 +0000 UTC m=+365.396031883" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.441153 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x6x46"] Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.444062 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.447944 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.455212 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6x46"] Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.519149 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77mh6\" (UniqueName: \"kubernetes.io/projected/f7cf3246-f6e6-4509-bde8-6f5db1285126-kube-api-access-77mh6\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.519215 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7cf3246-f6e6-4509-bde8-6f5db1285126-utilities\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.519251 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7cf3246-f6e6-4509-bde8-6f5db1285126-catalog-content\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.620558 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7cf3246-f6e6-4509-bde8-6f5db1285126-utilities\") pod \"certified-operators-x6x46\" (UID: 
\"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.621031 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7cf3246-f6e6-4509-bde8-6f5db1285126-catalog-content\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.621146 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77mh6\" (UniqueName: \"kubernetes.io/projected/f7cf3246-f6e6-4509-bde8-6f5db1285126-kube-api-access-77mh6\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.622311 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7cf3246-f6e6-4509-bde8-6f5db1285126-catalog-content\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.622403 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7cf3246-f6e6-4509-bde8-6f5db1285126-utilities\") pod \"certified-operators-x6x46\" (UID: \"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.643878 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77mh6\" (UniqueName: \"kubernetes.io/projected/f7cf3246-f6e6-4509-bde8-6f5db1285126-kube-api-access-77mh6\") pod \"certified-operators-x6x46\" (UID: 
\"f7cf3246-f6e6-4509-bde8-6f5db1285126\") " pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.687690 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.687763 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.731001 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2v29"] Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.779462 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.965095 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wjxs2"] Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.966394 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:31 crc kubenswrapper[4705]: I0216 14:59:31.998227 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wjxs2"] Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130115 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-registry-certificates\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130184 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130221 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130315 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-registry-tls\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130400 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130446 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-trusted-ca\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130477 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-bound-sa-token\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.130506 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd4mc\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-kube-api-access-jd4mc\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.153851 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.199794 4705 generic.go:334] "Generic (PLEG): container finished" podID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerID="08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88" exitCode=0 Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.199914 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2v29" event={"ID":"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634","Type":"ContainerDied","Data":"08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88"} Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.201078 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2v29" event={"ID":"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634","Type":"ContainerStarted","Data":"abefdacd3131f9637e18b5d6a682929bf8b75c5123f9e2a087bae18c0b3b4aa0"} Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.232873 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd4mc\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-kube-api-access-jd4mc\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.233080 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-registry-certificates\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" 
Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.233155 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.233195 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-registry-tls\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.233240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.233284 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-trusted-ca\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.233334 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-bound-sa-token\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") 
" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.235734 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-registry-certificates\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.235871 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.236331 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-trusted-ca\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.240075 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-registry-tls\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.240311 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: 
\"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.257728 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-bound-sa-token\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.260305 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd4mc\" (UniqueName: \"kubernetes.io/projected/81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb-kube-api-access-jd4mc\") pod \"image-registry-66df7c8f76-wjxs2\" (UID: \"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.281887 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6x46"] Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.290244 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:32 crc kubenswrapper[4705]: I0216 14:59:32.727654 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wjxs2"] Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.032939 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wptq4"] Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.034763 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.038049 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.043082 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wptq4"] Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.050819 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rtkh\" (UniqueName: \"kubernetes.io/projected/3c9c10e6-7615-4597-91c4-4a8c67ccf112-kube-api-access-2rtkh\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.050863 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9c10e6-7615-4597-91c4-4a8c67ccf112-utilities\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.050896 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9c10e6-7615-4597-91c4-4a8c67ccf112-catalog-content\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.153219 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rtkh\" (UniqueName: \"kubernetes.io/projected/3c9c10e6-7615-4597-91c4-4a8c67ccf112-kube-api-access-2rtkh\") pod \"redhat-marketplace-wptq4\" (UID: 
\"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.153279 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9c10e6-7615-4597-91c4-4a8c67ccf112-utilities\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.153308 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9c10e6-7615-4597-91c4-4a8c67ccf112-catalog-content\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.153762 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c9c10e6-7615-4597-91c4-4a8c67ccf112-utilities\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.153881 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c9c10e6-7615-4597-91c4-4a8c67ccf112-catalog-content\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.173382 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rtkh\" (UniqueName: \"kubernetes.io/projected/3c9c10e6-7615-4597-91c4-4a8c67ccf112-kube-api-access-2rtkh\") pod \"redhat-marketplace-wptq4\" (UID: \"3c9c10e6-7615-4597-91c4-4a8c67ccf112\") " 
pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.207514 4705 generic.go:334] "Generic (PLEG): container finished" podID="f7cf3246-f6e6-4509-bde8-6f5db1285126" containerID="e092a01ef5c4c273c453ceee4671ff828745ff30bfa6f985a4c5ddebbf76e6e7" exitCode=0 Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.207584 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6x46" event={"ID":"f7cf3246-f6e6-4509-bde8-6f5db1285126","Type":"ContainerDied","Data":"e092a01ef5c4c273c453ceee4671ff828745ff30bfa6f985a4c5ddebbf76e6e7"} Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.207610 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6x46" event={"ID":"f7cf3246-f6e6-4509-bde8-6f5db1285126","Type":"ContainerStarted","Data":"db486cf16cd77c52e3c348a4b5b35de52a37858533c08c496a3c14deddef78ac"} Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.208982 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" event={"ID":"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb","Type":"ContainerStarted","Data":"245fdd1cda28cee16ca4bf9c05e932cc7931b6d13004c144257c82ea0d6661cc"} Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.209084 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" event={"ID":"81bb3a7c-7f3b-4fe3-8784-4e719bd66ddb","Type":"ContainerStarted","Data":"adf0bd6eef49eed06144397724636f5ed969d1ddf657fcec82cf9c9105bb9d84"} Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.209891 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.212639 4705 generic.go:334] "Generic (PLEG): container finished" podID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" 
containerID="07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703" exitCode=0 Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.212772 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2v29" event={"ID":"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634","Type":"ContainerDied","Data":"07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703"} Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.254584 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" podStartSLOduration=2.254563437 podStartE2EDuration="2.254563437s" podCreationTimestamp="2026-02-16 14:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:59:33.247328173 +0000 UTC m=+367.432305259" watchObservedRunningTime="2026-02-16 14:59:33.254563437 +0000 UTC m=+367.439540513" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.348173 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:33 crc kubenswrapper[4705]: I0216 14:59:33.817034 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wptq4"] Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.219872 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c9c10e6-7615-4597-91c4-4a8c67ccf112" containerID="7c12c180da63a0da85d707b61f6b0ea37b59a3e80d87b1afefa45e70edc3b011" exitCode=0 Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.220026 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wptq4" event={"ID":"3c9c10e6-7615-4597-91c4-4a8c67ccf112","Type":"ContainerDied","Data":"7c12c180da63a0da85d707b61f6b0ea37b59a3e80d87b1afefa45e70edc3b011"} Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.220331 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wptq4" event={"ID":"3c9c10e6-7615-4597-91c4-4a8c67ccf112","Type":"ContainerStarted","Data":"f840be36fd4c2e71a952d516da4bd3e8ba40207d34a76fb2a691ea36620eeb72"} Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.224025 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2v29" event={"ID":"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634","Type":"ContainerStarted","Data":"0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8"} Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.292298 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j2v29" podStartSLOduration=2.706469627 podStartE2EDuration="4.292277137s" podCreationTimestamp="2026-02-16 14:59:30 +0000 UTC" firstStartedPulling="2026-02-16 14:59:32.204124218 +0000 UTC m=+366.389101294" lastFinishedPulling="2026-02-16 14:59:33.789931728 +0000 UTC m=+367.974908804" observedRunningTime="2026-02-16 14:59:34.288239484 +0000 
UTC m=+368.473216560" watchObservedRunningTime="2026-02-16 14:59:34.292277137 +0000 UTC m=+368.477254213" Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.433120 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dzbk2"] Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.435695 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.438983 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.443653 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzbk2"] Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.577824 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxwqz\" (UniqueName: \"kubernetes.io/projected/615ad81b-0e00-4b06-88eb-970b4e942b56-kube-api-access-gxwqz\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.577870 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/615ad81b-0e00-4b06-88eb-970b4e942b56-catalog-content\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.577921 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/615ad81b-0e00-4b06-88eb-970b4e942b56-utilities\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") 
" pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.679632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxwqz\" (UniqueName: \"kubernetes.io/projected/615ad81b-0e00-4b06-88eb-970b4e942b56-kube-api-access-gxwqz\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.679696 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/615ad81b-0e00-4b06-88eb-970b4e942b56-catalog-content\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.679762 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/615ad81b-0e00-4b06-88eb-970b4e942b56-utilities\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.680265 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/615ad81b-0e00-4b06-88eb-970b4e942b56-utilities\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.680884 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/615ad81b-0e00-4b06-88eb-970b4e942b56-catalog-content\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:34 crc 
kubenswrapper[4705]: I0216 14:59:34.702588 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxwqz\" (UniqueName: \"kubernetes.io/projected/615ad81b-0e00-4b06-88eb-970b4e942b56-kube-api-access-gxwqz\") pod \"redhat-operators-dzbk2\" (UID: \"615ad81b-0e00-4b06-88eb-970b4e942b56\") " pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:34 crc kubenswrapper[4705]: I0216 14:59:34.770679 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:35 crc kubenswrapper[4705]: I0216 14:59:35.210275 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzbk2"] Feb 16 14:59:35 crc kubenswrapper[4705]: I0216 14:59:35.231891 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzbk2" event={"ID":"615ad81b-0e00-4b06-88eb-970b4e942b56","Type":"ContainerStarted","Data":"2fe1afb2218ad27aa64a34391ad945ffc3289fcf06444335463fd768ddee689c"} Feb 16 14:59:35 crc kubenswrapper[4705]: I0216 14:59:35.235183 4705 generic.go:334] "Generic (PLEG): container finished" podID="f7cf3246-f6e6-4509-bde8-6f5db1285126" containerID="c0c79e3f53b996269456c02ee6a6774f2b46f3bcf728aff3ab1897d9622b86d5" exitCode=0 Feb 16 14:59:35 crc kubenswrapper[4705]: I0216 14:59:35.235250 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6x46" event={"ID":"f7cf3246-f6e6-4509-bde8-6f5db1285126","Type":"ContainerDied","Data":"c0c79e3f53b996269456c02ee6a6774f2b46f3bcf728aff3ab1897d9622b86d5"} Feb 16 14:59:37 crc kubenswrapper[4705]: E0216 14:59:37.108969 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]" Feb 16 14:59:37 crc kubenswrapper[4705]: I0216 14:59:37.252462 4705 generic.go:334] 
"Generic (PLEG): container finished" podID="3c9c10e6-7615-4597-91c4-4a8c67ccf112" containerID="60e8f536beaa6982622aac5d46efbaf8b72a6459fc8c7a3c13f7aab229f379fe" exitCode=0 Feb 16 14:59:37 crc kubenswrapper[4705]: I0216 14:59:37.252532 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wptq4" event={"ID":"3c9c10e6-7615-4597-91c4-4a8c67ccf112","Type":"ContainerDied","Data":"60e8f536beaa6982622aac5d46efbaf8b72a6459fc8c7a3c13f7aab229f379fe"} Feb 16 14:59:37 crc kubenswrapper[4705]: I0216 14:59:37.254664 4705 generic.go:334] "Generic (PLEG): container finished" podID="615ad81b-0e00-4b06-88eb-970b4e942b56" containerID="7e052e17d6ead6b7fdfd5a184438404e71c8236333bb41c9f4c77f29414f73c5" exitCode=0 Feb 16 14:59:37 crc kubenswrapper[4705]: I0216 14:59:37.254762 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzbk2" event={"ID":"615ad81b-0e00-4b06-88eb-970b4e942b56","Type":"ContainerDied","Data":"7e052e17d6ead6b7fdfd5a184438404e71c8236333bb41c9f4c77f29414f73c5"} Feb 16 14:59:37 crc kubenswrapper[4705]: I0216 14:59:37.257942 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6x46" event={"ID":"f7cf3246-f6e6-4509-bde8-6f5db1285126","Type":"ContainerStarted","Data":"f05d3d299868582465f9bd1a5cc5b56cde2fbd6fe692c396cd238a55a94f3980"} Feb 16 14:59:37 crc kubenswrapper[4705]: I0216 14:59:37.303981 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x6x46" podStartSLOduration=3.652180268 podStartE2EDuration="6.303961255s" podCreationTimestamp="2026-02-16 14:59:31 +0000 UTC" firstStartedPulling="2026-02-16 14:59:33.209434404 +0000 UTC m=+367.394411480" lastFinishedPulling="2026-02-16 14:59:35.861215391 +0000 UTC m=+370.046192467" observedRunningTime="2026-02-16 14:59:37.29882594 +0000 UTC m=+371.483803016" watchObservedRunningTime="2026-02-16 14:59:37.303961255 +0000 
UTC m=+371.488938331" Feb 16 14:59:38 crc kubenswrapper[4705]: I0216 14:59:38.267395 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wptq4" event={"ID":"3c9c10e6-7615-4597-91c4-4a8c67ccf112","Type":"ContainerStarted","Data":"6be444772eb44f090f34f396cbf185e43513811ca2d8778d41a10071e164383f"} Feb 16 14:59:38 crc kubenswrapper[4705]: I0216 14:59:38.296048 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wptq4" podStartSLOduration=1.85856395 podStartE2EDuration="5.296015277s" podCreationTimestamp="2026-02-16 14:59:33 +0000 UTC" firstStartedPulling="2026-02-16 14:59:34.222214151 +0000 UTC m=+368.407191227" lastFinishedPulling="2026-02-16 14:59:37.659665468 +0000 UTC m=+371.844642554" observedRunningTime="2026-02-16 14:59:38.285524691 +0000 UTC m=+372.470501787" watchObservedRunningTime="2026-02-16 14:59:38.296015277 +0000 UTC m=+372.480992363" Feb 16 14:59:39 crc kubenswrapper[4705]: I0216 14:59:39.280865 4705 generic.go:334] "Generic (PLEG): container finished" podID="615ad81b-0e00-4b06-88eb-970b4e942b56" containerID="570497e729fea4bc5d83de6f9c83cb3b427c22f83907d8ee06734c839c14d70b" exitCode=0 Feb 16 14:59:39 crc kubenswrapper[4705]: I0216 14:59:39.282198 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzbk2" event={"ID":"615ad81b-0e00-4b06-88eb-970b4e942b56","Type":"ContainerDied","Data":"570497e729fea4bc5d83de6f9c83cb3b427c22f83907d8ee06734c839c14d70b"} Feb 16 14:59:40 crc kubenswrapper[4705]: I0216 14:59:40.576574 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:40 crc kubenswrapper[4705]: I0216 14:59:40.576747 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:40 crc kubenswrapper[4705]: I0216 14:59:40.583773 4705 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.174032 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.174565 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.235551 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.301982 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.363259 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fnrqq"] Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.392926 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j2v29" Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.614180 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.619871 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-85b67b995c-f7f68" Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.779844 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.779908 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:41 crc kubenswrapper[4705]: I0216 14:59:41.833098 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:42 crc kubenswrapper[4705]: I0216 14:59:42.305843 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzbk2" event={"ID":"615ad81b-0e00-4b06-88eb-970b4e942b56","Type":"ContainerStarted","Data":"4d1fe6b812c56a820cea55ee65b2fee9df6b1cf717d9ce392c279a8c717277c9"} Feb 16 14:59:42 crc kubenswrapper[4705]: I0216 14:59:42.327211 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dzbk2" podStartSLOduration=4.029047106 podStartE2EDuration="8.32719229s" podCreationTimestamp="2026-02-16 14:59:34 +0000 UTC" firstStartedPulling="2026-02-16 14:59:37.255823617 +0000 UTC m=+371.440800693" lastFinishedPulling="2026-02-16 14:59:41.553968801 +0000 UTC m=+375.738945877" observedRunningTime="2026-02-16 14:59:42.326701536 +0000 UTC m=+376.511678612" watchObservedRunningTime="2026-02-16 14:59:42.32719229 +0000 UTC m=+376.512169366" Feb 16 14:59:42 crc kubenswrapper[4705]: I0216 14:59:42.352295 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x6x46" Feb 16 14:59:43 crc kubenswrapper[4705]: I0216 14:59:43.348318 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:43 crc kubenswrapper[4705]: I0216 14:59:43.348871 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:43 crc kubenswrapper[4705]: I0216 14:59:43.402884 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:44 crc kubenswrapper[4705]: I0216 
14:59:44.376184 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wptq4" Feb 16 14:59:44 crc kubenswrapper[4705]: I0216 14:59:44.770950 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:44 crc kubenswrapper[4705]: I0216 14:59:44.771437 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:45 crc kubenswrapper[4705]: I0216 14:59:45.826260 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dzbk2" podUID="615ad81b-0e00-4b06-88eb-970b4e942b56" containerName="registry-server" probeResult="failure" output=< Feb 16 14:59:45 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 14:59:45 crc kubenswrapper[4705]: > Feb 16 14:59:52 crc kubenswrapper[4705]: I0216 14:59:52.299951 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-wjxs2" Feb 16 14:59:52 crc kubenswrapper[4705]: I0216 14:59:52.370754 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4msnt"] Feb 16 14:59:54 crc kubenswrapper[4705]: I0216 14:59:54.852811 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 14:59:54 crc kubenswrapper[4705]: I0216 14:59:54.930688 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dzbk2" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.203857 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"] Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.206138 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.208480 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.208723 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.219507 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"] Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.321790 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c9b6f2-f412-4860-9524-8b671c477f83-config-volume\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.322080 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m96xl\" (UniqueName: \"kubernetes.io/projected/24c9b6f2-f412-4860-9524-8b671c477f83-kube-api-access-m96xl\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.322325 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c9b6f2-f412-4860-9524-8b671c477f83-secret-volume\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.423980 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c9b6f2-f412-4860-9524-8b671c477f83-config-volume\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.424469 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m96xl\" (UniqueName: \"kubernetes.io/projected/24c9b6f2-f412-4860-9524-8b671c477f83-kube-api-access-m96xl\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.424529 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c9b6f2-f412-4860-9524-8b671c477f83-secret-volume\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.425451 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c9b6f2-f412-4860-9524-8b671c477f83-config-volume\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.436382 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/24c9b6f2-f412-4860-9524-8b671c477f83-secret-volume\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.449722 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m96xl\" (UniqueName: \"kubernetes.io/projected/24c9b6f2-f412-4860-9524-8b671c477f83-kube-api-access-m96xl\") pod \"collect-profiles-29520900-6bdkx\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.575207 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" Feb 16 15:00:00 crc kubenswrapper[4705]: I0216 15:00:00.775570 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"] Feb 16 15:00:00 crc kubenswrapper[4705]: W0216 15:00:00.791079 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24c9b6f2_f412_4860_9524_8b671c477f83.slice/crio-cb76d46b34cd67982f348d874d4dd55ecca96018cef4c3876cb95593c7b3881d WatchSource:0}: Error finding container cb76d46b34cd67982f348d874d4dd55ecca96018cef4c3876cb95593c7b3881d: Status 404 returned error can't find the container with id cb76d46b34cd67982f348d874d4dd55ecca96018cef4c3876cb95593c7b3881d Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.454526 4705 generic.go:334] "Generic (PLEG): container finished" podID="24c9b6f2-f412-4860-9524-8b671c477f83" containerID="6fb2c5a749e97a8125f039d31686c6310a49662f79ec4dbdd96faae30b6b0365" exitCode=0 Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.454978 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" event={"ID":"24c9b6f2-f412-4860-9524-8b671c477f83","Type":"ContainerDied","Data":"6fb2c5a749e97a8125f039d31686c6310a49662f79ec4dbdd96faae30b6b0365"} Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.455019 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" event={"ID":"24c9b6f2-f412-4860-9524-8b671c477f83","Type":"ContainerStarted","Data":"cb76d46b34cd67982f348d874d4dd55ecca96018cef4c3876cb95593c7b3881d"} Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.684565 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.684663 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.684729 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:00:01 crc kubenswrapper[4705]: I0216 15:00:01.685528 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a034fb5b1f0023b5e5ab28e7cd5612968c1d8e98f397066aaa9d090d45277308"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:00:01 crc kubenswrapper[4705]: 
I0216 15:00:01.685607 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://a034fb5b1f0023b5e5ab28e7cd5612968c1d8e98f397066aaa9d090d45277308" gracePeriod=600 Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.469904 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="a034fb5b1f0023b5e5ab28e7cd5612968c1d8e98f397066aaa9d090d45277308" exitCode=0 Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.470857 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"a034fb5b1f0023b5e5ab28e7cd5612968c1d8e98f397066aaa9d090d45277308"} Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.470907 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"8ed511d58ebaa68773f182923341f6793c7c9792bc8c0ee7250b0f3212fee0a6"} Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.470932 4705 scope.go:117] "RemoveContainer" containerID="8e9fa96ce8f5bddc6e0ef583db04bb7fbd25175e73b8bba89d1b56f385bb031a" Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.721033 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.780294 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c9b6f2-f412-4860-9524-8b671c477f83-config-volume\") pod \"24c9b6f2-f412-4860-9524-8b671c477f83\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.780498 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c9b6f2-f412-4860-9524-8b671c477f83-secret-volume\") pod \"24c9b6f2-f412-4860-9524-8b671c477f83\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.780543 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m96xl\" (UniqueName: \"kubernetes.io/projected/24c9b6f2-f412-4860-9524-8b671c477f83-kube-api-access-m96xl\") pod \"24c9b6f2-f412-4860-9524-8b671c477f83\" (UID: \"24c9b6f2-f412-4860-9524-8b671c477f83\") " Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.781819 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24c9b6f2-f412-4860-9524-8b671c477f83-config-volume" (OuterVolumeSpecName: "config-volume") pod "24c9b6f2-f412-4860-9524-8b671c477f83" (UID: "24c9b6f2-f412-4860-9524-8b671c477f83"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.785691 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24c9b6f2-f412-4860-9524-8b671c477f83-kube-api-access-m96xl" (OuterVolumeSpecName: "kube-api-access-m96xl") pod "24c9b6f2-f412-4860-9524-8b671c477f83" (UID: "24c9b6f2-f412-4860-9524-8b671c477f83"). 
InnerVolumeSpecName "kube-api-access-m96xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.790256 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.793513 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c9b6f2-f412-4860-9524-8b671c477f83-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24c9b6f2-f412-4860-9524-8b671c477f83" (UID: "24c9b6f2-f412-4860-9524-8b671c477f83"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.823082 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.883084 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c9b6f2-f412-4860-9524-8b671c477f83-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.883503 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m96xl\" (UniqueName: \"kubernetes.io/projected/24c9b6f2-f412-4860-9524-8b671c477f83-kube-api-access-m96xl\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:02 crc kubenswrapper[4705]: I0216 15:00:02.883651 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c9b6f2-f412-4860-9524-8b671c477f83-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:03 crc kubenswrapper[4705]: I0216 15:00:03.481164 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" 
event={"ID":"24c9b6f2-f412-4860-9524-8b671c477f83","Type":"ContainerDied","Data":"cb76d46b34cd67982f348d874d4dd55ecca96018cef4c3876cb95593c7b3881d"} Feb 16 15:00:03 crc kubenswrapper[4705]: I0216 15:00:03.481627 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb76d46b34cd67982f348d874d4dd55ecca96018cef4c3876cb95593c7b3881d" Feb 16 15:00:03 crc kubenswrapper[4705]: I0216 15:00:03.481182 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx" Feb 16 15:00:03 crc kubenswrapper[4705]: I0216 15:00:03.535679 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.409480 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-fnrqq" podUID="ee710a8b-3390-4749-949f-e8efa983b1ae" containerName="console" containerID="cri-o://3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519" gracePeriod=15 Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.828176 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fnrqq_ee710a8b-3390-4749-949f-e8efa983b1ae/console/0.log" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.828659 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.961692 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stnhs\" (UniqueName: \"kubernetes.io/projected/ee710a8b-3390-4749-949f-e8efa983b1ae-kube-api-access-stnhs\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.962181 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-console-config\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.962398 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-oauth-serving-cert\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.962518 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-serving-cert\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.962598 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-oauth-config\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.962650 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-trusted-ca-bundle\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.963556 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-service-ca\") pod \"ee710a8b-3390-4749-949f-e8efa983b1ae\" (UID: \"ee710a8b-3390-4749-949f-e8efa983b1ae\") " Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.963779 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.963916 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-console-config" (OuterVolumeSpecName: "console-config") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.963916 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.965654 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-service-ca" (OuterVolumeSpecName: "service-ca") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.966858 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.967007 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.967040 4705 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.967060 4705 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee710a8b-3390-4749-949f-e8efa983b1ae-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.970706 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee710a8b-3390-4749-949f-e8efa983b1ae-kube-api-access-stnhs" (OuterVolumeSpecName: "kube-api-access-stnhs") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "kube-api-access-stnhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.971603 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:00:06 crc kubenswrapper[4705]: I0216 15:00:06.974508 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ee710a8b-3390-4749-949f-e8efa983b1ae" (UID: "ee710a8b-3390-4749-949f-e8efa983b1ae"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.068766 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stnhs\" (UniqueName: \"kubernetes.io/projected/ee710a8b-3390-4749-949f-e8efa983b1ae-kube-api-access-stnhs\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.069017 4705 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.069117 4705 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee710a8b-3390-4749-949f-e8efa983b1ae-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.525511 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-fnrqq_ee710a8b-3390-4749-949f-e8efa983b1ae/console/0.log" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.525602 4705 generic.go:334] "Generic (PLEG): container finished" podID="ee710a8b-3390-4749-949f-e8efa983b1ae" containerID="3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519" exitCode=2 Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.525660 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fnrqq" event={"ID":"ee710a8b-3390-4749-949f-e8efa983b1ae","Type":"ContainerDied","Data":"3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519"} Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.525718 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fnrqq" event={"ID":"ee710a8b-3390-4749-949f-e8efa983b1ae","Type":"ContainerDied","Data":"7a32273060fa9c5acf759e7781d16b8a6a0afc21afb3ce21b1bb14a5f231b5c2"} Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.525761 4705 scope.go:117] "RemoveContainer" containerID="3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.526063 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fnrqq" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.553098 4705 scope.go:117] "RemoveContainer" containerID="3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519" Feb 16 15:00:07 crc kubenswrapper[4705]: E0216 15:00:07.554690 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519\": container with ID starting with 3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519 not found: ID does not exist" containerID="3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.554754 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519"} err="failed to get container status \"3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519\": rpc error: code = NotFound desc = could not find container \"3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519\": container with ID starting with 3cdc2e6126c10582c872368e6dd5522ee6b607e67e9c59754c2091019542f519 not found: ID does not exist" Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.572768 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fnrqq"] Feb 16 15:00:07 crc kubenswrapper[4705]: I0216 15:00:07.578783 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fnrqq"] Feb 16 15:00:08 crc kubenswrapper[4705]: I0216 15:00:08.430556 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee710a8b-3390-4749-949f-e8efa983b1ae" path="/var/lib/kubelet/pods/ee710a8b-3390-4749-949f-e8efa983b1ae/volumes" Feb 16 15:00:17 crc kubenswrapper[4705]: I0216 15:00:17.427869 4705 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" podUID="347b9dab-29d3-4126-994e-6501af72985a" containerName="registry" containerID="cri-o://8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3" gracePeriod=30 Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.039107 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.105441 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-registry-certificates\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106040 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-registry-tls\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106268 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106356 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs7sx\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-kube-api-access-xs7sx\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 
15:00:19.106416 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/347b9dab-29d3-4126-994e-6501af72985a-installation-pull-secrets\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106438 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106470 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-trusted-ca\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106577 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-bound-sa-token\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.106662 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/347b9dab-29d3-4126-994e-6501af72985a-ca-trust-extracted\") pod \"347b9dab-29d3-4126-994e-6501af72985a\" (UID: \"347b9dab-29d3-4126-994e-6501af72985a\") " Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.107069 4705 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.107581 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.111391 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-kube-api-access-xs7sx" (OuterVolumeSpecName: "kube-api-access-xs7sx") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "kube-api-access-xs7sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.111346 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.111542 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.111860 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347b9dab-29d3-4126-994e-6501af72985a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.119521 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.137026 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/347b9dab-29d3-4126-994e-6501af72985a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "347b9dab-29d3-4126-994e-6501af72985a" (UID: "347b9dab-29d3-4126-994e-6501af72985a"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.208533 4705 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/347b9dab-29d3-4126-994e-6501af72985a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.208603 4705 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.208633 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs7sx\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-kube-api-access-xs7sx\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.208664 4705 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/347b9dab-29d3-4126-994e-6501af72985a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.208688 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/347b9dab-29d3-4126-994e-6501af72985a-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.208710 4705 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/347b9dab-29d3-4126-994e-6501af72985a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.617472 4705 generic.go:334] "Generic (PLEG): container finished" podID="347b9dab-29d3-4126-994e-6501af72985a" containerID="8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3" exitCode=0 Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 
15:00:19.617529 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" event={"ID":"347b9dab-29d3-4126-994e-6501af72985a","Type":"ContainerDied","Data":"8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3"} Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.617548 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.617562 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4msnt" event={"ID":"347b9dab-29d3-4126-994e-6501af72985a","Type":"ContainerDied","Data":"a85e7e62d04fb828a3650bdfb354f55b8cca777243fccbeb90166d171d6b20fc"} Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.617585 4705 scope.go:117] "RemoveContainer" containerID="8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.639311 4705 scope.go:117] "RemoveContainer" containerID="8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3" Feb 16 15:00:19 crc kubenswrapper[4705]: E0216 15:00:19.639767 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3\": container with ID starting with 8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3 not found: ID does not exist" containerID="8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.639809 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3"} err="failed to get container status \"8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3\": rpc error: code = 
NotFound desc = could not find container \"8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3\": container with ID starting with 8bd1c70e7e4a4a55fa580b35226017d23a552902e00465838279643f0bab2ac3 not found: ID does not exist" Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.651930 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4msnt"] Feb 16 15:00:19 crc kubenswrapper[4705]: I0216 15:00:19.656075 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4msnt"] Feb 16 15:00:20 crc kubenswrapper[4705]: I0216 15:00:20.431139 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347b9dab-29d3-4126-994e-6501af72985a" path="/var/lib/kubelet/pods/347b9dab-29d3-4126-994e-6501af72985a/volumes" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.635313 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7bb776c56c-pzs4q"] Feb 16 15:01:00 crc kubenswrapper[4705]: E0216 15:01:00.636565 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee710a8b-3390-4749-949f-e8efa983b1ae" containerName="console" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.636589 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee710a8b-3390-4749-949f-e8efa983b1ae" containerName="console" Feb 16 15:01:00 crc kubenswrapper[4705]: E0216 15:01:00.636625 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347b9dab-29d3-4126-994e-6501af72985a" containerName="registry" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.636637 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="347b9dab-29d3-4126-994e-6501af72985a" containerName="registry" Feb 16 15:01:00 crc kubenswrapper[4705]: E0216 15:01:00.636657 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c9b6f2-f412-4860-9524-8b671c477f83" containerName="collect-profiles" Feb 16 
15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.636673 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c9b6f2-f412-4860-9524-8b671c477f83" containerName="collect-profiles" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.636875 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee710a8b-3390-4749-949f-e8efa983b1ae" containerName="console" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.636905 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="347b9dab-29d3-4126-994e-6501af72985a" containerName="registry" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.636926 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c9b6f2-f412-4860-9524-8b671c477f83" containerName="collect-profiles" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.637694 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.656109 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bb776c56c-pzs4q"] Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.750850 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-serving-cert\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.750905 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-trusted-ca-bundle\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc 
kubenswrapper[4705]: I0216 15:01:00.750943 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cxdc\" (UniqueName: \"kubernetes.io/projected/80172f35-e30c-409c-b28e-eb65d41dd384-kube-api-access-4cxdc\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.750967 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-oauth-serving-cert\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.751102 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-service-ca\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.751245 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-console-config\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.751465 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-oauth-config\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " 
pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.852813 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cxdc\" (UniqueName: \"kubernetes.io/projected/80172f35-e30c-409c-b28e-eb65d41dd384-kube-api-access-4cxdc\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.852901 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-oauth-serving-cert\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.852964 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-service-ca\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.853070 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-console-config\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.853166 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-oauth-config\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " 
pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.853230 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-serving-cert\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.853275 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-trusted-ca-bundle\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.854509 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-console-config\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.854550 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-service-ca\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.854666 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-trusted-ca-bundle\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 
crc kubenswrapper[4705]: I0216 15:01:00.855153 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-oauth-serving-cert\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.861089 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-serving-cert\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.861777 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-oauth-config\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:00 crc kubenswrapper[4705]: I0216 15:01:00.871955 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cxdc\" (UniqueName: \"kubernetes.io/projected/80172f35-e30c-409c-b28e-eb65d41dd384-kube-api-access-4cxdc\") pod \"console-7bb776c56c-pzs4q\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:01 crc kubenswrapper[4705]: I0216 15:01:01.023195 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:01 crc kubenswrapper[4705]: I0216 15:01:01.536139 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bb776c56c-pzs4q"] Feb 16 15:01:01 crc kubenswrapper[4705]: W0216 15:01:01.549190 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80172f35_e30c_409c_b28e_eb65d41dd384.slice/crio-62764daed3103786ebb88f7fa6ff0d0d41c134f9dfddbfa2f020958e2f20e60b WatchSource:0}: Error finding container 62764daed3103786ebb88f7fa6ff0d0d41c134f9dfddbfa2f020958e2f20e60b: Status 404 returned error can't find the container with id 62764daed3103786ebb88f7fa6ff0d0d41c134f9dfddbfa2f020958e2f20e60b Feb 16 15:01:01 crc kubenswrapper[4705]: I0216 15:01:01.967051 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb776c56c-pzs4q" event={"ID":"80172f35-e30c-409c-b28e-eb65d41dd384","Type":"ContainerStarted","Data":"ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8"} Feb 16 15:01:01 crc kubenswrapper[4705]: I0216 15:01:01.967136 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb776c56c-pzs4q" event={"ID":"80172f35-e30c-409c-b28e-eb65d41dd384","Type":"ContainerStarted","Data":"62764daed3103786ebb88f7fa6ff0d0d41c134f9dfddbfa2f020958e2f20e60b"} Feb 16 15:01:01 crc kubenswrapper[4705]: I0216 15:01:01.988952 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7bb776c56c-pzs4q" podStartSLOduration=1.988920786 podStartE2EDuration="1.988920786s" podCreationTimestamp="2026-02-16 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:01:01.988853994 +0000 UTC m=+456.173831070" watchObservedRunningTime="2026-02-16 15:01:01.988920786 +0000 UTC m=+456.173897902" Feb 16 
15:01:11 crc kubenswrapper[4705]: I0216 15:01:11.023507 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:11 crc kubenswrapper[4705]: I0216 15:01:11.026632 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:11 crc kubenswrapper[4705]: I0216 15:01:11.031448 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:11 crc kubenswrapper[4705]: I0216 15:01:11.071721 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:01:11 crc kubenswrapper[4705]: I0216 15:01:11.152657 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57c5b94cd8-vqsl6"] Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.229135 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-57c5b94cd8-vqsl6" podUID="32a46224-2f51-4cc5-9541-d1e5ac0d98eb" containerName="console" containerID="cri-o://bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836" gracePeriod=15 Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.628895 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57c5b94cd8-vqsl6_32a46224-2f51-4cc5-9541-d1e5ac0d98eb/console/0.log" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.629493 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.673796 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjj4g\" (UniqueName: \"kubernetes.io/projected/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-kube-api-access-hjj4g\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.673869 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-oauth-serving-cert\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.673922 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-config\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.673947 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-serving-cert\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.673990 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-oauth-config\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.674029 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-service-ca\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.674101 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-trusted-ca-bundle\") pod \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\" (UID: \"32a46224-2f51-4cc5-9541-d1e5ac0d98eb\") " Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.675156 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.675288 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-config" (OuterVolumeSpecName: "console-config") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.676120 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-service-ca" (OuterVolumeSpecName: "service-ca") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.676143 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.682513 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-kube-api-access-hjj4g" (OuterVolumeSpecName: "kube-api-access-hjj4g") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "kube-api-access-hjj4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.682604 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.682911 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "32a46224-2f51-4cc5-9541-d1e5ac0d98eb" (UID: "32a46224-2f51-4cc5-9541-d1e5ac0d98eb"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777150 4705 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777193 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777206 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777221 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjj4g\" (UniqueName: \"kubernetes.io/projected/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-kube-api-access-hjj4g\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777236 4705 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777247 4705 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:36 crc kubenswrapper[4705]: I0216 15:01:36.777259 4705 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32a46224-2f51-4cc5-9541-d1e5ac0d98eb-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:01:37 crc 
kubenswrapper[4705]: I0216 15:01:37.282913 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57c5b94cd8-vqsl6_32a46224-2f51-4cc5-9541-d1e5ac0d98eb/console/0.log" Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.283439 4705 generic.go:334] "Generic (PLEG): container finished" podID="32a46224-2f51-4cc5-9541-d1e5ac0d98eb" containerID="bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836" exitCode=2 Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.283513 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57c5b94cd8-vqsl6" event={"ID":"32a46224-2f51-4cc5-9541-d1e5ac0d98eb","Type":"ContainerDied","Data":"bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836"} Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.283590 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57c5b94cd8-vqsl6" event={"ID":"32a46224-2f51-4cc5-9541-d1e5ac0d98eb","Type":"ContainerDied","Data":"638ba6eacff71725b50db8f008ac8fcbf0b93dd5e605bf9a759eecda45bb8f53"} Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.283608 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57c5b94cd8-vqsl6" Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.283657 4705 scope.go:117] "RemoveContainer" containerID="bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836" Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.323074 4705 scope.go:117] "RemoveContainer" containerID="bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836" Feb 16 15:01:37 crc kubenswrapper[4705]: E0216 15:01:37.323870 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836\": container with ID starting with bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836 not found: ID does not exist" containerID="bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836" Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.323934 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836"} err="failed to get container status \"bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836\": rpc error: code = NotFound desc = could not find container \"bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836\": container with ID starting with bd454cedce6f88418394bc59bc9dc76b77a05cd7e1bc55cab535878245480836 not found: ID does not exist" Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.329192 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57c5b94cd8-vqsl6"] Feb 16 15:01:37 crc kubenswrapper[4705]: I0216 15:01:37.335239 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-57c5b94cd8-vqsl6"] Feb 16 15:01:38 crc kubenswrapper[4705]: I0216 15:01:38.429347 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="32a46224-2f51-4cc5-9541-d1e5ac0d98eb" path="/var/lib/kubelet/pods/32a46224-2f51-4cc5-9541-d1e5ac0d98eb/volumes" Feb 16 15:02:01 crc kubenswrapper[4705]: I0216 15:02:01.684462 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:02:01 crc kubenswrapper[4705]: I0216 15:02:01.685081 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:02:31 crc kubenswrapper[4705]: I0216 15:02:31.684767 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:02:31 crc kubenswrapper[4705]: I0216 15:02:31.685753 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.132250 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl"] Feb 16 15:02:57 crc kubenswrapper[4705]: E0216 15:02:57.133117 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a46224-2f51-4cc5-9541-d1e5ac0d98eb" 
containerName="console" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.133132 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a46224-2f51-4cc5-9541-d1e5ac0d98eb" containerName="console" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.133258 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a46224-2f51-4cc5-9541-d1e5ac0d98eb" containerName="console" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.134181 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.137930 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.160144 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl"] Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.224982 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.225098 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc 
kubenswrapper[4705]: I0216 15:02:57.225158 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56zxq\" (UniqueName: \"kubernetes.io/projected/0d36f8fb-4d40-48ef-b2af-aee94e39388a-kube-api-access-56zxq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.326596 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.326724 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.326783 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56zxq\" (UniqueName: \"kubernetes.io/projected/0d36f8fb-4d40-48ef-b2af-aee94e39388a-kube-api-access-56zxq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.327326 4705 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.327631 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.355517 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56zxq\" (UniqueName: \"kubernetes.io/projected/0d36f8fb-4d40-48ef-b2af-aee94e39388a-kube-api-access-56zxq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.471755 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:02:57 crc kubenswrapper[4705]: I0216 15:02:57.779200 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl"] Feb 16 15:02:58 crc kubenswrapper[4705]: I0216 15:02:58.008579 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" event={"ID":"0d36f8fb-4d40-48ef-b2af-aee94e39388a","Type":"ContainerStarted","Data":"0359100cb99e84b41217a7de1e79a7da3afdf45d2fc6a1ac7355b749dce5e44c"} Feb 16 15:02:58 crc kubenswrapper[4705]: I0216 15:02:58.009042 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" event={"ID":"0d36f8fb-4d40-48ef-b2af-aee94e39388a","Type":"ContainerStarted","Data":"af9d587dc12e66cee3c869b48aa051d9ef95eae69828c2668c474b994769d2a5"} Feb 16 15:02:59 crc kubenswrapper[4705]: I0216 15:02:59.019892 4705 generic.go:334] "Generic (PLEG): container finished" podID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerID="0359100cb99e84b41217a7de1e79a7da3afdf45d2fc6a1ac7355b749dce5e44c" exitCode=0 Feb 16 15:02:59 crc kubenswrapper[4705]: I0216 15:02:59.020001 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" event={"ID":"0d36f8fb-4d40-48ef-b2af-aee94e39388a","Type":"ContainerDied","Data":"0359100cb99e84b41217a7de1e79a7da3afdf45d2fc6a1ac7355b749dce5e44c"} Feb 16 15:02:59 crc kubenswrapper[4705]: I0216 15:02:59.022513 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:03:01 crc kubenswrapper[4705]: I0216 15:03:01.684699 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:03:01 crc kubenswrapper[4705]: I0216 15:03:01.685705 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:03:01 crc kubenswrapper[4705]: I0216 15:03:01.685785 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:03:01 crc kubenswrapper[4705]: I0216 15:03:01.686838 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ed511d58ebaa68773f182923341f6793c7c9792bc8c0ee7250b0f3212fee0a6"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:03:01 crc kubenswrapper[4705]: I0216 15:03:01.686911 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://8ed511d58ebaa68773f182923341f6793c7c9792bc8c0ee7250b0f3212fee0a6" gracePeriod=600 Feb 16 15:03:02 crc kubenswrapper[4705]: I0216 15:03:02.052586 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="8ed511d58ebaa68773f182923341f6793c7c9792bc8c0ee7250b0f3212fee0a6" exitCode=0 Feb 16 15:03:02 crc kubenswrapper[4705]: I0216 15:03:02.052776 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"8ed511d58ebaa68773f182923341f6793c7c9792bc8c0ee7250b0f3212fee0a6"} Feb 16 15:03:02 crc kubenswrapper[4705]: I0216 15:03:02.053302 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"66c40339ff6d451b12f9977b3110b2e136ea4dcbaee6612ad6a69e020c815948"} Feb 16 15:03:02 crc kubenswrapper[4705]: I0216 15:03:02.053339 4705 scope.go:117] "RemoveContainer" containerID="a034fb5b1f0023b5e5ab28e7cd5612968c1d8e98f397066aaa9d090d45277308" Feb 16 15:03:02 crc kubenswrapper[4705]: I0216 15:03:02.056664 4705 generic.go:334] "Generic (PLEG): container finished" podID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerID="a694af306eb8d0590e45cc51974fa037a409725ee7c9141fd04fa8be085ed648" exitCode=0 Feb 16 15:03:02 crc kubenswrapper[4705]: I0216 15:03:02.056772 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" event={"ID":"0d36f8fb-4d40-48ef-b2af-aee94e39388a","Type":"ContainerDied","Data":"a694af306eb8d0590e45cc51974fa037a409725ee7c9141fd04fa8be085ed648"} Feb 16 15:03:03 crc kubenswrapper[4705]: I0216 15:03:03.065214 4705 generic.go:334] "Generic (PLEG): container finished" podID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerID="0b63100b539042dc74bc1fc2285d16764f13298cc19566f13ed0b77025455be3" exitCode=0 Feb 16 15:03:03 crc kubenswrapper[4705]: I0216 15:03:03.065674 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" event={"ID":"0d36f8fb-4d40-48ef-b2af-aee94e39388a","Type":"ContainerDied","Data":"0b63100b539042dc74bc1fc2285d16764f13298cc19566f13ed0b77025455be3"} Feb 16 15:03:04 crc 
kubenswrapper[4705]: I0216 15:03:04.294643 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.478439 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-bundle\") pod \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.479149 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-util\") pod \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.479281 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56zxq\" (UniqueName: \"kubernetes.io/projected/0d36f8fb-4d40-48ef-b2af-aee94e39388a-kube-api-access-56zxq\") pod \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\" (UID: \"0d36f8fb-4d40-48ef-b2af-aee94e39388a\") " Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.481326 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-bundle" (OuterVolumeSpecName: "bundle") pod "0d36f8fb-4d40-48ef-b2af-aee94e39388a" (UID: "0d36f8fb-4d40-48ef-b2af-aee94e39388a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.488671 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d36f8fb-4d40-48ef-b2af-aee94e39388a-kube-api-access-56zxq" (OuterVolumeSpecName: "kube-api-access-56zxq") pod "0d36f8fb-4d40-48ef-b2af-aee94e39388a" (UID: "0d36f8fb-4d40-48ef-b2af-aee94e39388a"). InnerVolumeSpecName "kube-api-access-56zxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.504485 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-util" (OuterVolumeSpecName: "util") pod "0d36f8fb-4d40-48ef-b2af-aee94e39388a" (UID: "0d36f8fb-4d40-48ef-b2af-aee94e39388a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.581204 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.581269 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d36f8fb-4d40-48ef-b2af-aee94e39388a-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:04 crc kubenswrapper[4705]: I0216 15:03:04.581288 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56zxq\" (UniqueName: \"kubernetes.io/projected/0d36f8fb-4d40-48ef-b2af-aee94e39388a-kube-api-access-56zxq\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:05 crc kubenswrapper[4705]: I0216 15:03:05.086136 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" 
event={"ID":"0d36f8fb-4d40-48ef-b2af-aee94e39388a","Type":"ContainerDied","Data":"af9d587dc12e66cee3c869b48aa051d9ef95eae69828c2668c474b994769d2a5"} Feb 16 15:03:05 crc kubenswrapper[4705]: I0216 15:03:05.086191 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af9d587dc12e66cee3c869b48aa051d9ef95eae69828c2668c474b994769d2a5" Feb 16 15:03:05 crc kubenswrapper[4705]: I0216 15:03:05.086294 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl" Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.334945 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tshhr"] Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336064 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-controller" containerID="cri-o://8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336142 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="nbdb" containerID="cri-o://3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336193 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="northd" containerID="cri-o://ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336235 4705 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336270 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-node" containerID="cri-o://b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336237 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="sbdb" containerID="cri-o://f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.336308 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-acl-logging" containerID="cri-o://7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1" gracePeriod=30 Feb 16 15:03:08 crc kubenswrapper[4705]: I0216 15:03:08.367999 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" containerID="cri-o://38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f" gracePeriod=30 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.118711 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovnkube-controller/3.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 
15:03:09.121227 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovn-acl-logging/0.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.121861 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovn-controller/0.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122361 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f" exitCode=0 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122414 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0" exitCode=0 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122424 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1" exitCode=0 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122433 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88" exitCode=0 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122440 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1" exitCode=143 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122447 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4" exitCode=143 Feb 16 15:03:09 crc kubenswrapper[4705]: 
I0216 15:03:09.122622 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122812 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122879 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122942 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.123015 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.122894 4705 scope.go:117] "RemoveContainer" containerID="f2601e7c7270291a1e0e01f5182974ece5d5685bb008e9727c7d2797a7444262" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.123078 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" 
event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.124834 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/2.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.126250 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/1.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.126301 4705 generic.go:334] "Generic (PLEG): container finished" podID="0ec06562-0237-4709-9469-033783d7d545" containerID="c280e78eb2bfe3800a24e6f07f41b296d367a8891b813aff9f9aa9e3820570f6" exitCode=2 Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.126339 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerDied","Data":"c280e78eb2bfe3800a24e6f07f41b296d367a8891b813aff9f9aa9e3820570f6"} Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.126936 4705 scope.go:117] "RemoveContainer" containerID="c280e78eb2bfe3800a24e6f07f41b296d367a8891b813aff9f9aa9e3820570f6" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.127178 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2ljf7_openshift-multus(0ec06562-0237-4709-9469-033783d7d545)\"" pod="openshift-multus/multus-2ljf7" podUID="0ec06562-0237-4709-9469-033783d7d545" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.156492 4705 scope.go:117] "RemoveContainer" containerID="797fa5cb882ced23ec870d7f3d356ca6e6506ac97a3849c6247a0516f6263105" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.536463 4705 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovn-acl-logging/0.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.537480 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovn-controller/0.log" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.537926 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586520 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-var-lib-openvswitch\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586579 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-bin\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586619 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-env-overrides\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586651 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-slash\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc 
kubenswrapper[4705]: I0216 15:03:09.586638 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586692 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67wc5\" (UniqueName: \"kubernetes.io/projected/59e81100-8761-4e5f-bab6-07df1c795ccb-kube-api-access-67wc5\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586726 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-node-log\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586744 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-ovn\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586765 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-kubelet\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586780 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-log-socket\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586796 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-openvswitch\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586832 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-systemd-units\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586862 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-ovn-kubernetes\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-etc-openvswitch\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586918 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-systemd\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc 
kubenswrapper[4705]: I0216 15:03:09.586951 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.586974 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59e81100-8761-4e5f-bab6-07df1c795ccb-ovn-node-metrics-cert\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587004 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-netd\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587023 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-config\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587042 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587045 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-script-lib\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587121 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-netns\") pod \"59e81100-8761-4e5f-bab6-07df1c795ccb\" (UID: \"59e81100-8761-4e5f-bab6-07df1c795ccb\") " Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587429 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587456 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-slash" (OuterVolumeSpecName: "host-slash") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587721 4705 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587736 4705 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587751 4705 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587760 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587789 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587813 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587833 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587851 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-node-log" (OuterVolumeSpecName: "node-log") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587870 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587914 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587934 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-log-socket" (OuterVolumeSpecName: "log-socket") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587952 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587969 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.587987 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.588010 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.588432 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.588996 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.593071 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e81100-8761-4e5f-bab6-07df1c795ccb-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.601835 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59e81100-8761-4e5f-bab6-07df1c795ccb-kube-api-access-67wc5" (OuterVolumeSpecName: "kube-api-access-67wc5") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "kube-api-access-67wc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.607586 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-drlsg"] Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.607924 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="util" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.607948 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="util" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.607958 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.607968 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.607978 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.607987 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.607997 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="nbdb" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608009 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="nbdb" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608024 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608032 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608046 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-acl-logging" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608053 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-acl-logging" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608064 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="sbdb" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608071 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="sbdb" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608097 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608106 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608118 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" 
containerName="kube-rbac-proxy-node" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608126 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-node" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608138 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="pull" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608147 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="pull" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608160 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608167 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608179 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608186 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608194 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="northd" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608201 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="northd" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608213 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kubecfg-setup" 
Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608221 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kubecfg-setup" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608232 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="extract" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608239 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="extract" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608404 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608417 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="northd" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608430 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608440 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608450 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="nbdb" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608462 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="kube-rbac-proxy-node" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608471 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="sbdb" Feb 16 15:03:09 
crc kubenswrapper[4705]: I0216 15:03:09.608482 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-acl-logging" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608500 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608509 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovn-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608517 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d36f8fb-4d40-48ef-b2af-aee94e39388a" containerName="extract" Feb 16 15:03:09 crc kubenswrapper[4705]: E0216 15:03:09.608678 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608687 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.608833 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.609083 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerName="ovnkube-controller" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.612893 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.637787 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "59e81100-8761-4e5f-bab6-07df1c795ccb" (UID: "59e81100-8761-4e5f-bab6-07df1c795ccb"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689384 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-run-netns\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689443 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-var-lib-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689516 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjpw8\" (UniqueName: \"kubernetes.io/projected/fc67360e-7dc8-4772-bc68-60709d7e4e31-kube-api-access-tjpw8\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689545 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689577 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-cni-bin\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689611 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovn-node-metrics-cert\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689632 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-systemd-units\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689662 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689685 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-kubelet\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689711 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-etc-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689731 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-systemd\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689757 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-node-log\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.689933 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-env-overrides\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690013 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-cni-netd\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690072 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-log-socket\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690125 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-run-ovn-kubernetes\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690154 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovnkube-config\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690232 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-ovn\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690262 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-slash\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690339 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovnkube-script-lib\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690476 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67wc5\" (UniqueName: \"kubernetes.io/projected/59e81100-8761-4e5f-bab6-07df1c795ccb-kube-api-access-67wc5\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690493 4705 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690518 4705 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-node-log\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690527 4705 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690538 4705 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690547 4705 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690557 4705 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690567 4705 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690591 4705 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690601 4705 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690611 4705 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690621 4705 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/59e81100-8761-4e5f-bab6-07df1c795ccb-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690631 4705 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690640 4705 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59e81100-8761-4e5f-bab6-07df1c795ccb-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690649 4705 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.690674 4705 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59e81100-8761-4e5f-bab6-07df1c795ccb-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792088 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-etc-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792144 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-systemd\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 
15:03:09.792166 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-node-log\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792230 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-env-overrides\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792253 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-cni-netd\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792275 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-log-socket\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792293 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-run-ovn-kubernetes\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792310 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovnkube-config\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792338 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-ovn\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792359 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-slash\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792402 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovnkube-script-lib\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792431 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-run-netns\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792451 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-var-lib-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792471 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjpw8\" (UniqueName: \"kubernetes.io/projected/fc67360e-7dc8-4772-bc68-60709d7e4e31-kube-api-access-tjpw8\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792491 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792507 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-cni-bin\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792528 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-systemd-units\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792546 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovn-node-metrics-cert\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792561 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-kubelet\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792581 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792665 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792727 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-etc-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792748 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-systemd\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.792769 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-node-log\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.793520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-env-overrides\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.793560 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-cni-netd\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.793584 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-log-socket\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.793605 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-run-ovn-kubernetes\") pod \"ovnkube-node-drlsg\" (UID: 
\"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794037 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovnkube-config\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794075 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-ovn\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794100 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-slash\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794519 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovnkube-script-lib\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794556 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-run-netns\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc 
kubenswrapper[4705]: I0216 15:03:09.794583 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-var-lib-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794902 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-run-openvswitch\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794934 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-cni-bin\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.794960 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-systemd-units\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.795469 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc67360e-7dc8-4772-bc68-60709d7e4e31-host-kubelet\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.798401 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc67360e-7dc8-4772-bc68-60709d7e4e31-ovn-node-metrics-cert\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.824717 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjpw8\" (UniqueName: \"kubernetes.io/projected/fc67360e-7dc8-4772-bc68-60709d7e4e31-kube-api-access-tjpw8\") pod \"ovnkube-node-drlsg\" (UID: \"fc67360e-7dc8-4772-bc68-60709d7e4e31\") " pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:09 crc kubenswrapper[4705]: I0216 15:03:09.957731 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.141714 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovn-acl-logging/0.log" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.143436 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tshhr_59e81100-8761-4e5f-bab6-07df1c795ccb/ovn-controller/0.log" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.143799 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02" exitCode=0 Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.143847 4705 generic.go:334] "Generic (PLEG): container finished" podID="59e81100-8761-4e5f-bab6-07df1c795ccb" containerID="b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf" exitCode=0 Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.143923 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" 
event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02"} Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.143958 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf"} Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.143990 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" event={"ID":"59e81100-8761-4e5f-bab6-07df1c795ccb","Type":"ContainerDied","Data":"42045b84aca42a832078848d2b0993c882266e872a0d71d75f9c0c7f12bd5a14"} Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.144012 4705 scope.go:117] "RemoveContainer" containerID="38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.144231 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tshhr" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.150711 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/2.log" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.156491 4705 generic.go:334] "Generic (PLEG): container finished" podID="fc67360e-7dc8-4772-bc68-60709d7e4e31" containerID="3b59ce49ed456ee51dfd98110d67b37ffe27a7441b0fd7f28142a8cad073dbca" exitCode=0 Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.156558 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerDied","Data":"3b59ce49ed456ee51dfd98110d67b37ffe27a7441b0fd7f28142a8cad073dbca"} Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.156802 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"54d6ea8b6a911f8ce91e71ee4a2848ae6c8a5b2ddedf3fc0640aafaa3a6480e7"} Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.201599 4705 scope.go:117] "RemoveContainer" containerID="f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.235278 4705 scope.go:117] "RemoveContainer" containerID="3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.271517 4705 scope.go:117] "RemoveContainer" containerID="ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.305322 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tshhr"] Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.313443 4705 scope.go:117] "RemoveContainer" 
containerID="9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.326532 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tshhr"] Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.341980 4705 scope.go:117] "RemoveContainer" containerID="b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.365103 4705 scope.go:117] "RemoveContainer" containerID="7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.386606 4705 scope.go:117] "RemoveContainer" containerID="8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.403985 4705 scope.go:117] "RemoveContainer" containerID="429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.429615 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59e81100-8761-4e5f-bab6-07df1c795ccb" path="/var/lib/kubelet/pods/59e81100-8761-4e5f-bab6-07df1c795ccb/volumes" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.439638 4705 scope.go:117] "RemoveContainer" containerID="38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.440433 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f\": container with ID starting with 38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f not found: ID does not exist" containerID="38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.440464 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f"} err="failed to get container status \"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f\": rpc error: code = NotFound desc = could not find container \"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f\": container with ID starting with 38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.440488 4705 scope.go:117] "RemoveContainer" containerID="f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.440775 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\": container with ID starting with f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0 not found: ID does not exist" containerID="f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.440798 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0"} err="failed to get container status \"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\": rpc error: code = NotFound desc = could not find container \"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\": container with ID starting with f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.440811 4705 scope.go:117] "RemoveContainer" containerID="3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.441099 4705 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\": container with ID starting with 3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1 not found: ID does not exist" containerID="3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.441118 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1"} err="failed to get container status \"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\": rpc error: code = NotFound desc = could not find container \"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\": container with ID starting with 3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.441130 4705 scope.go:117] "RemoveContainer" containerID="ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.444852 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\": container with ID starting with ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88 not found: ID does not exist" containerID="ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.444884 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88"} err="failed to get container status \"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\": rpc error: code = NotFound desc = could not find container 
\"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\": container with ID starting with ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.444901 4705 scope.go:117] "RemoveContainer" containerID="9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.445305 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\": container with ID starting with 9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02 not found: ID does not exist" containerID="9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.445379 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02"} err="failed to get container status \"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\": rpc error: code = NotFound desc = could not find container \"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\": container with ID starting with 9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.445416 4705 scope.go:117] "RemoveContainer" containerID="b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.445908 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\": container with ID starting with b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf not found: ID does not exist" 
containerID="b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.445961 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf"} err="failed to get container status \"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\": rpc error: code = NotFound desc = could not find container \"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\": container with ID starting with b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.446008 4705 scope.go:117] "RemoveContainer" containerID="7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.449187 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\": container with ID starting with 7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1 not found: ID does not exist" containerID="7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.449215 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1"} err="failed to get container status \"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\": rpc error: code = NotFound desc = could not find container \"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\": container with ID starting with 7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.449231 4705 scope.go:117] 
"RemoveContainer" containerID="8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.449577 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\": container with ID starting with 8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4 not found: ID does not exist" containerID="8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.449604 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4"} err="failed to get container status \"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\": rpc error: code = NotFound desc = could not find container \"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\": container with ID starting with 8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.449622 4705 scope.go:117] "RemoveContainer" containerID="429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff" Feb 16 15:03:10 crc kubenswrapper[4705]: E0216 15:03:10.450139 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\": container with ID starting with 429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff not found: ID does not exist" containerID="429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.450161 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff"} err="failed to get container status \"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\": rpc error: code = NotFound desc = could not find container \"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\": container with ID starting with 429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.450176 4705 scope.go:117] "RemoveContainer" containerID="38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.450758 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f"} err="failed to get container status \"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f\": rpc error: code = NotFound desc = could not find container \"38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f\": container with ID starting with 38015c88fb5e323b7d5aa4cc888d61d5c59624385a3b7518da2d480d6bb1018f not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.450782 4705 scope.go:117] "RemoveContainer" containerID="f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.451039 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0"} err="failed to get container status \"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\": rpc error: code = NotFound desc = could not find container \"f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0\": container with ID starting with f1328c83a902e3515aa25f5c421ce20bc61bdc72ff7ff6a8c114ff6ac9c60bf0 not found: ID does not 
exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.451063 4705 scope.go:117] "RemoveContainer" containerID="3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.452295 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1"} err="failed to get container status \"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\": rpc error: code = NotFound desc = could not find container \"3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1\": container with ID starting with 3c628cd0540704c75b695f448a633988ebd571e155127300a79148fcd56e3ea1 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.452322 4705 scope.go:117] "RemoveContainer" containerID="ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.452874 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88"} err="failed to get container status \"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\": rpc error: code = NotFound desc = could not find container \"ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88\": container with ID starting with ca5662cb887f1191e8dc711fe9121436d708b46bc86a82838e89ecd1ca34da88 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.452891 4705 scope.go:117] "RemoveContainer" containerID="9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.453155 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02"} err="failed to get container status 
\"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\": rpc error: code = NotFound desc = could not find container \"9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02\": container with ID starting with 9cb42feb32d8578452e5cb558312f0d27358056856cb62db490192b991387e02 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.453171 4705 scope.go:117] "RemoveContainer" containerID="b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.453593 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf"} err="failed to get container status \"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\": rpc error: code = NotFound desc = could not find container \"b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf\": container with ID starting with b85ad9a27922df4e271a8bf991a59fd9f41e534e4400d0361e9ee3d9715aefcf not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.453648 4705 scope.go:117] "RemoveContainer" containerID="7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.454053 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1"} err="failed to get container status \"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\": rpc error: code = NotFound desc = could not find container \"7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1\": container with ID starting with 7806972b2fb33722dcbc5b0a541b7e7fe85cfdc2f9c4972e9ea566535603f4b1 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.454101 4705 scope.go:117] "RemoveContainer" 
containerID="8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.455722 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4"} err="failed to get container status \"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\": rpc error: code = NotFound desc = could not find container \"8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4\": container with ID starting with 8d1f7e2cd09ee985bfea04764dafebb4b8ef647b36f0442592de7759b535e8b4 not found: ID does not exist" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.455760 4705 scope.go:117] "RemoveContainer" containerID="429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff" Feb 16 15:03:10 crc kubenswrapper[4705]: I0216 15:03:10.456073 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff"} err="failed to get container status \"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\": rpc error: code = NotFound desc = could not find container \"429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff\": container with ID starting with 429144d20d8199314cce9ebcc62d0adfd394864644bbdaf2eec46c82648375ff not found: ID does not exist" Feb 16 15:03:11 crc kubenswrapper[4705]: I0216 15:03:11.165830 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"95039a12e6b758bdbc6f4a8e014a8d1561a5920d131ca658b288cc3ad6d9911d"} Feb 16 15:03:11 crc kubenswrapper[4705]: I0216 15:03:11.166395 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" 
event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"bf27d8e3d3c90b79f3ad11747ba3df25378ba91839964917fb4213e922deb5d9"} Feb 16 15:03:11 crc kubenswrapper[4705]: I0216 15:03:11.166410 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"64e0709dc3d0b164095a7b3bd49d3c5ba3b65a453a0459d1dcb913d2802e63b4"} Feb 16 15:03:11 crc kubenswrapper[4705]: I0216 15:03:11.166419 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"b0d4af810793c8f3b7c153ed399cb1e9fbb2b22f6af363011235403128f39352"} Feb 16 15:03:11 crc kubenswrapper[4705]: I0216 15:03:11.166430 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"1bb2013977fdc5560ac4027cfd7c3cb8222455e312e4a8d6d71fa8ac71bb11ea"} Feb 16 15:03:11 crc kubenswrapper[4705]: I0216 15:03:11.166438 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"aa84d4f9b1385c793207e1e5d810609b143dd91a6dfd00c180575a132636b3a4"} Feb 16 15:03:14 crc kubenswrapper[4705]: I0216 15:03:14.190080 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"f1ebf74cabced645dcbd8c68f2343636812faf0735da7e5bb7423c97c116faac"} Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.666704 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg"] Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.667919 
4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.669888 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.670320 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-5cct8" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.670459 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.715064 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl"] Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.716061 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.720079 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-94grs" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.720096 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.732256 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh"] Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.735577 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.789061 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bcrn\" (UniqueName: \"kubernetes.io/projected/59894fc4-090e-4e57-84d9-c6fdbe5f3ceb-kube-api-access-8bcrn\") pod \"obo-prometheus-operator-68bc856cb9-f8kwg\" (UID: \"59894fc4-090e-4e57-84d9-c6fdbe5f3ceb\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.890825 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l2rxp"] Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.890929 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b90dedac-68bb-409d-9860-af59c6c7d172-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl\" (UID: \"b90dedac-68bb-409d-9860-af59c6c7d172\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.891009 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/81328a1c-32d6-4ce6-9139-8418d2e8fa52-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh\" (UID: \"81328a1c-32d6-4ce6-9139-8418d2e8fa52\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.891253 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81328a1c-32d6-4ce6-9139-8418d2e8fa52-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh\" (UID: \"81328a1c-32d6-4ce6-9139-8418d2e8fa52\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.891500 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bcrn\" (UniqueName: \"kubernetes.io/projected/59894fc4-090e-4e57-84d9-c6fdbe5f3ceb-kube-api-access-8bcrn\") pod \"obo-prometheus-operator-68bc856cb9-f8kwg\" (UID: \"59894fc4-090e-4e57-84d9-c6fdbe5f3ceb\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.891635 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b90dedac-68bb-409d-9860-af59c6c7d172-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl\" (UID: \"b90dedac-68bb-409d-9860-af59c6c7d172\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.891784 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.894461 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-h9bq9" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.894953 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.918638 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bcrn\" (UniqueName: \"kubernetes.io/projected/59894fc4-090e-4e57-84d9-c6fdbe5f3ceb-kube-api-access-8bcrn\") pod \"obo-prometheus-operator-68bc856cb9-f8kwg\" (UID: \"59894fc4-090e-4e57-84d9-c6fdbe5f3ceb\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:15 crc kubenswrapper[4705]: I0216 15:03:15.985919 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.002802 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b90dedac-68bb-409d-9860-af59c6c7d172-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl\" (UID: \"b90dedac-68bb-409d-9860-af59c6c7d172\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.002874 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b90dedac-68bb-409d-9860-af59c6c7d172-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl\" (UID: \"b90dedac-68bb-409d-9860-af59c6c7d172\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.002900 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c77sv\" (UniqueName: \"kubernetes.io/projected/5510c272-cd32-4850-a9fa-daff2e045b92-kube-api-access-c77sv\") pod \"observability-operator-59bdc8b94-l2rxp\" (UID: \"5510c272-cd32-4850-a9fa-daff2e045b92\") " pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.002934 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/81328a1c-32d6-4ce6-9139-8418d2e8fa52-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh\" (UID: \"81328a1c-32d6-4ce6-9139-8418d2e8fa52\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.002953 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5510c272-cd32-4850-a9fa-daff2e045b92-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l2rxp\" (UID: \"5510c272-cd32-4850-a9fa-daff2e045b92\") " pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.002976 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81328a1c-32d6-4ce6-9139-8418d2e8fa52-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh\" (UID: \"81328a1c-32d6-4ce6-9139-8418d2e8fa52\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.007791 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/81328a1c-32d6-4ce6-9139-8418d2e8fa52-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh\" (UID: \"81328a1c-32d6-4ce6-9139-8418d2e8fa52\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.008139 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81328a1c-32d6-4ce6-9139-8418d2e8fa52-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh\" (UID: \"81328a1c-32d6-4ce6-9139-8418d2e8fa52\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.008312 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b90dedac-68bb-409d-9860-af59c6c7d172-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-75758f868f-lthbl\" (UID: \"b90dedac-68bb-409d-9860-af59c6c7d172\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.015743 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(6fd9edad8fc683408a693b6a86b54dbf99db7a834617bab4c0844865cad277fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.015837 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(6fd9edad8fc683408a693b6a86b54dbf99db7a834617bab4c0844865cad277fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.015890 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(6fd9edad8fc683408a693b6a86b54dbf99db7a834617bab4c0844865cad277fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.015945 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators(59894fc4-090e-4e57-84d9-c6fdbe5f3ceb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators(59894fc4-090e-4e57-84d9-c6fdbe5f3ceb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(6fd9edad8fc683408a693b6a86b54dbf99db7a834617bab4c0844865cad277fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" podUID="59894fc4-090e-4e57-84d9-c6fdbe5f3ceb" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.021816 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b90dedac-68bb-409d-9860-af59c6c7d172-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl\" (UID: \"b90dedac-68bb-409d-9860-af59c6c7d172\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.030732 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.050214 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.071678 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(d27a9eabb91c01ac9a4b9c218328a9723b03ab77c8fbe17e9f4f6ac4afef72e8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.071780 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(d27a9eabb91c01ac9a4b9c218328a9723b03ab77c8fbe17e9f4f6ac4afef72e8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.071820 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(d27a9eabb91c01ac9a4b9c218328a9723b03ab77c8fbe17e9f4f6ac4afef72e8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.071893 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators(b90dedac-68bb-409d-9860-af59c6c7d172)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators(b90dedac-68bb-409d-9860-af59c6c7d172)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(d27a9eabb91c01ac9a4b9c218328a9723b03ab77c8fbe17e9f4f6ac4afef72e8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" podUID="b90dedac-68bb-409d-9860-af59c6c7d172" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.081984 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(146e861c38ba3a37f5789bce8191711445cb89997d8f4d3cc341172a8d99f657): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.082080 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(146e861c38ba3a37f5789bce8191711445cb89997d8f4d3cc341172a8d99f657): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.082109 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(146e861c38ba3a37f5789bce8191711445cb89997d8f4d3cc341172a8d99f657): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.082171 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators(81328a1c-32d6-4ce6-9139-8418d2e8fa52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators(81328a1c-32d6-4ce6-9139-8418d2e8fa52)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(146e861c38ba3a37f5789bce8191711445cb89997d8f4d3cc341172a8d99f657): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" podUID="81328a1c-32d6-4ce6-9139-8418d2e8fa52" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.095241 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tqj56"] Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.096144 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.098395 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-r75dd" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.104246 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c77sv\" (UniqueName: \"kubernetes.io/projected/5510c272-cd32-4850-a9fa-daff2e045b92-kube-api-access-c77sv\") pod \"observability-operator-59bdc8b94-l2rxp\" (UID: \"5510c272-cd32-4850-a9fa-daff2e045b92\") " pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.104324 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5510c272-cd32-4850-a9fa-daff2e045b92-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l2rxp\" (UID: \"5510c272-cd32-4850-a9fa-daff2e045b92\") " pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.104402 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8acc36de-d26d-44cd-bad6-d31f0a4a4520-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tqj56\" (UID: \"8acc36de-d26d-44cd-bad6-d31f0a4a4520\") " pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.104478 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g26pw\" (UniqueName: \"kubernetes.io/projected/8acc36de-d26d-44cd-bad6-d31f0a4a4520-kube-api-access-g26pw\") pod \"perses-operator-5bf474d74f-tqj56\" (UID: \"8acc36de-d26d-44cd-bad6-d31f0a4a4520\") " 
pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.108919 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5510c272-cd32-4850-a9fa-daff2e045b92-observability-operator-tls\") pod \"observability-operator-59bdc8b94-l2rxp\" (UID: \"5510c272-cd32-4850-a9fa-daff2e045b92\") " pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.122674 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c77sv\" (UniqueName: \"kubernetes.io/projected/5510c272-cd32-4850-a9fa-daff2e045b92-kube-api-access-c77sv\") pod \"observability-operator-59bdc8b94-l2rxp\" (UID: \"5510c272-cd32-4850-a9fa-daff2e045b92\") " pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.205297 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.206104 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8acc36de-d26d-44cd-bad6-d31f0a4a4520-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tqj56\" (UID: \"8acc36de-d26d-44cd-bad6-d31f0a4a4520\") " pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.206172 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g26pw\" (UniqueName: \"kubernetes.io/projected/8acc36de-d26d-44cd-bad6-d31f0a4a4520-kube-api-access-g26pw\") pod \"perses-operator-5bf474d74f-tqj56\" (UID: \"8acc36de-d26d-44cd-bad6-d31f0a4a4520\") " pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.207197 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8acc36de-d26d-44cd-bad6-d31f0a4a4520-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tqj56\" (UID: \"8acc36de-d26d-44cd-bad6-d31f0a4a4520\") " pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.211923 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" event={"ID":"fc67360e-7dc8-4772-bc68-60709d7e4e31","Type":"ContainerStarted","Data":"021e6d99f83ee98863067d674a51fd9d911769ff59e5de6efe7131658cb81c64"} Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.212525 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.212579 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.225841 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g26pw\" (UniqueName: \"kubernetes.io/projected/8acc36de-d26d-44cd-bad6-d31f0a4a4520-kube-api-access-g26pw\") pod \"perses-operator-5bf474d74f-tqj56\" (UID: \"8acc36de-d26d-44cd-bad6-d31f0a4a4520\") " pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.242613 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(8359fb1ecbff5cbf7e8ebf74bc383be3e722d8993aac854d5da38e1e6e37331d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.242710 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(8359fb1ecbff5cbf7e8ebf74bc383be3e722d8993aac854d5da38e1e6e37331d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.242740 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(8359fb1ecbff5cbf7e8ebf74bc383be3e722d8993aac854d5da38e1e6e37331d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.242798 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-l2rxp_openshift-operators(5510c272-cd32-4850-a9fa-daff2e045b92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-l2rxp_openshift-operators(5510c272-cd32-4850-a9fa-daff2e045b92)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(8359fb1ecbff5cbf7e8ebf74bc383be3e722d8993aac854d5da38e1e6e37331d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" podUID="5510c272-cd32-4850-a9fa-daff2e045b92" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.251167 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" podStartSLOduration=7.251145568 podStartE2EDuration="7.251145568s" podCreationTimestamp="2026-02-16 15:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:03:16.248497344 +0000 UTC m=+590.433474420" watchObservedRunningTime="2026-02-16 15:03:16.251145568 +0000 UTC m=+590.436122644" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.261806 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.414300 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.476503 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(ffd4dfaf7c9f0e1ccda011d562acabe25c1a072f7baeddfc2b0aeb69d449de86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.476626 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(ffd4dfaf7c9f0e1ccda011d562acabe25c1a072f7baeddfc2b0aeb69d449de86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.476660 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(ffd4dfaf7c9f0e1ccda011d562acabe25c1a072f7baeddfc2b0aeb69d449de86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.476743 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-tqj56_openshift-operators(8acc36de-d26d-44cd-bad6-d31f0a4a4520)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-tqj56_openshift-operators(8acc36de-d26d-44cd-bad6-d31f0a4a4520)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(ffd4dfaf7c9f0e1ccda011d562acabe25c1a072f7baeddfc2b0aeb69d449de86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" podUID="8acc36de-d26d-44cd-bad6-d31f0a4a4520" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.605696 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tqj56"] Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.610993 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg"] Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.611146 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.611539 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.638572 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(d3ae48184c7402219402b5695969de376acf36a650ba50356bcd0141ce65adbb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.638674 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(d3ae48184c7402219402b5695969de376acf36a650ba50356bcd0141ce65adbb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.638700 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(d3ae48184c7402219402b5695969de376acf36a650ba50356bcd0141ce65adbb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.638753 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators(59894fc4-090e-4e57-84d9-c6fdbe5f3ceb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators(59894fc4-090e-4e57-84d9-c6fdbe5f3ceb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(d3ae48184c7402219402b5695969de376acf36a650ba50356bcd0141ce65adbb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" podUID="59894fc4-090e-4e57-84d9-c6fdbe5f3ceb" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.647615 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l2rxp"] Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.654052 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl"] Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.654194 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.654794 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.665146 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh"] Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.682694 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: I0216 15:03:16.683305 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.688560 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(6c276d2d6aec0bcd8cf33539961024cb6820e39d0bd3dfd6ddb99ecaf1cb5286): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.688652 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(6c276d2d6aec0bcd8cf33539961024cb6820e39d0bd3dfd6ddb99ecaf1cb5286): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.688685 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(6c276d2d6aec0bcd8cf33539961024cb6820e39d0bd3dfd6ddb99ecaf1cb5286): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.688745 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators(b90dedac-68bb-409d-9860-af59c6c7d172)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators(b90dedac-68bb-409d-9860-af59c6c7d172)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(6c276d2d6aec0bcd8cf33539961024cb6820e39d0bd3dfd6ddb99ecaf1cb5286): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" podUID="b90dedac-68bb-409d-9860-af59c6c7d172" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.711211 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(b26f2ec2d1b7d5bd31a64e3a9f539258d219637affb1596bb9420afefb24cb17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.711305 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(b26f2ec2d1b7d5bd31a64e3a9f539258d219637affb1596bb9420afefb24cb17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.711330 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(b26f2ec2d1b7d5bd31a64e3a9f539258d219637affb1596bb9420afefb24cb17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:16 crc kubenswrapper[4705]: E0216 15:03:16.711438 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators(81328a1c-32d6-4ce6-9139-8418d2e8fa52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators(81328a1c-32d6-4ce6-9139-8418d2e8fa52)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(b26f2ec2d1b7d5bd31a64e3a9f539258d219637affb1596bb9420afefb24cb17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" podUID="81328a1c-32d6-4ce6-9139-8418d2e8fa52" Feb 16 15:03:17 crc kubenswrapper[4705]: I0216 15:03:17.219135 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:17 crc kubenswrapper[4705]: I0216 15:03:17.219192 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:17 crc kubenswrapper[4705]: I0216 15:03:17.219550 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:17 crc kubenswrapper[4705]: I0216 15:03:17.219694 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:17 crc kubenswrapper[4705]: I0216 15:03:17.220096 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:17 crc kubenswrapper[4705]: I0216 15:03:17.298302 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.314697 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(07ab3988b1f93cbaac04bb24e856cb38e9858a3d316b03119e50387f72148310): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.314885 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(07ab3988b1f93cbaac04bb24e856cb38e9858a3d316b03119e50387f72148310): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.314957 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(07ab3988b1f93cbaac04bb24e856cb38e9858a3d316b03119e50387f72148310): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.315065 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-l2rxp_openshift-operators(5510c272-cd32-4850-a9fa-daff2e045b92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-l2rxp_openshift-operators(5510c272-cd32-4850-a9fa-daff2e045b92)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(07ab3988b1f93cbaac04bb24e856cb38e9858a3d316b03119e50387f72148310): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" podUID="5510c272-cd32-4850-a9fa-daff2e045b92" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.319255 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(b2bc31f6aeab4dba718787b3310a33fba5125b458b582fab3f375fbea73b4822): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.319343 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(b2bc31f6aeab4dba718787b3310a33fba5125b458b582fab3f375fbea73b4822): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.319383 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(b2bc31f6aeab4dba718787b3310a33fba5125b458b582fab3f375fbea73b4822): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:17 crc kubenswrapper[4705]: E0216 15:03:17.319444 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-tqj56_openshift-operators(8acc36de-d26d-44cd-bad6-d31f0a4a4520)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-tqj56_openshift-operators(8acc36de-d26d-44cd-bad6-d31f0a4a4520)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(b2bc31f6aeab4dba718787b3310a33fba5125b458b582fab3f375fbea73b4822): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" podUID="8acc36de-d26d-44cd-bad6-d31f0a4a4520" Feb 16 15:03:24 crc kubenswrapper[4705]: I0216 15:03:24.420406 4705 scope.go:117] "RemoveContainer" containerID="c280e78eb2bfe3800a24e6f07f41b296d367a8891b813aff9f9aa9e3820570f6" Feb 16 15:03:24 crc kubenswrapper[4705]: E0216 15:03:24.421145 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2ljf7_openshift-multus(0ec06562-0237-4709-9469-033783d7d545)\"" pod="openshift-multus/multus-2ljf7" podUID="0ec06562-0237-4709-9469-033783d7d545" Feb 16 15:03:26 crc kubenswrapper[4705]: I0216 15:03:26.782419 4705 scope.go:117] "RemoveContainer" containerID="5ba52b7047a4bed388cbfd455b1ec058a60b989e6041232ddaab6b24cae29873" Feb 16 15:03:28 crc kubenswrapper[4705]: I0216 15:03:28.418596 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:28 crc kubenswrapper[4705]: I0216 15:03:28.419525 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:28 crc kubenswrapper[4705]: E0216 15:03:28.457032 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(db5022999eab3676825c45a06aa59a7061eb4a083c41e8310f9f7c6db5928b1b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 15:03:28 crc kubenswrapper[4705]: E0216 15:03:28.457137 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(db5022999eab3676825c45a06aa59a7061eb4a083c41e8310f9f7c6db5928b1b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:28 crc kubenswrapper[4705]: E0216 15:03:28.457160 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(db5022999eab3676825c45a06aa59a7061eb4a083c41e8310f9f7c6db5928b1b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:28 crc kubenswrapper[4705]: E0216 15:03:28.457221 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators(59894fc4-090e-4e57-84d9-c6fdbe5f3ceb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators(59894fc4-090e-4e57-84d9-c6fdbe5f3ceb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-f8kwg_openshift-operators_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb_0(db5022999eab3676825c45a06aa59a7061eb4a083c41e8310f9f7c6db5928b1b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" podUID="59894fc4-090e-4e57-84d9-c6fdbe5f3ceb" Feb 16 15:03:30 crc kubenswrapper[4705]: I0216 15:03:30.419379 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:30 crc kubenswrapper[4705]: I0216 15:03:30.420319 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:30 crc kubenswrapper[4705]: E0216 15:03:30.448725 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(0002ce38cfb1589f22aace34da09557c1da26847f7e289c8b8f6ce927eef45cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:30 crc kubenswrapper[4705]: E0216 15:03:30.448816 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(0002ce38cfb1589f22aace34da09557c1da26847f7e289c8b8f6ce927eef45cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:30 crc kubenswrapper[4705]: E0216 15:03:30.448840 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(0002ce38cfb1589f22aace34da09557c1da26847f7e289c8b8f6ce927eef45cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:30 crc kubenswrapper[4705]: E0216 15:03:30.448892 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-l2rxp_openshift-operators(5510c272-cd32-4850-a9fa-daff2e045b92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-l2rxp_openshift-operators(5510c272-cd32-4850-a9fa-daff2e045b92)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-l2rxp_openshift-operators_5510c272-cd32-4850-a9fa-daff2e045b92_0(0002ce38cfb1589f22aace34da09557c1da26847f7e289c8b8f6ce927eef45cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" podUID="5510c272-cd32-4850-a9fa-daff2e045b92" Feb 16 15:03:31 crc kubenswrapper[4705]: I0216 15:03:31.419029 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:31 crc kubenswrapper[4705]: I0216 15:03:31.420293 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:31 crc kubenswrapper[4705]: E0216 15:03:31.459697 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(8da0aae07e7a4ef868d8d40e2889f16223628d8ce46021acb82de4ae6c6f2574): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 15:03:31 crc kubenswrapper[4705]: E0216 15:03:31.459827 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(8da0aae07e7a4ef868d8d40e2889f16223628d8ce46021acb82de4ae6c6f2574): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:31 crc kubenswrapper[4705]: E0216 15:03:31.459872 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(8da0aae07e7a4ef868d8d40e2889f16223628d8ce46021acb82de4ae6c6f2574): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:31 crc kubenswrapper[4705]: E0216 15:03:31.459966 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators(81328a1c-32d6-4ce6-9139-8418d2e8fa52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators(81328a1c-32d6-4ce6-9139-8418d2e8fa52)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_openshift-operators_81328a1c-32d6-4ce6-9139-8418d2e8fa52_0(8da0aae07e7a4ef868d8d40e2889f16223628d8ce46021acb82de4ae6c6f2574): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" podUID="81328a1c-32d6-4ce6-9139-8418d2e8fa52" Feb 16 15:03:32 crc kubenswrapper[4705]: I0216 15:03:32.419042 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:32 crc kubenswrapper[4705]: I0216 15:03:32.419178 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:32 crc kubenswrapper[4705]: I0216 15:03:32.419646 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:32 crc kubenswrapper[4705]: I0216 15:03:32.420069 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.489968 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(730a1ff2c35e8d5a5858bc5dfd8da70cb39f85ad7371af6b4cb00225b215c4e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.490050 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(730a1ff2c35e8d5a5858bc5dfd8da70cb39f85ad7371af6b4cb00225b215c4e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.490078 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(730a1ff2c35e8d5a5858bc5dfd8da70cb39f85ad7371af6b4cb00225b215c4e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.490134 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-tqj56_openshift-operators(8acc36de-d26d-44cd-bad6-d31f0a4a4520)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-tqj56_openshift-operators(8acc36de-d26d-44cd-bad6-d31f0a4a4520)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tqj56_openshift-operators_8acc36de-d26d-44cd-bad6-d31f0a4a4520_0(730a1ff2c35e8d5a5858bc5dfd8da70cb39f85ad7371af6b4cb00225b215c4e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" podUID="8acc36de-d26d-44cd-bad6-d31f0a4a4520" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.502610 4705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(76afef5f433e7cdc9a3ba89427ecfea419a001036a905934117b14969712d094): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.502723 4705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(76afef5f433e7cdc9a3ba89427ecfea419a001036a905934117b14969712d094): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.502756 4705 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(76afef5f433e7cdc9a3ba89427ecfea419a001036a905934117b14969712d094): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:32 crc kubenswrapper[4705]: E0216 15:03:32.502836 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators(b90dedac-68bb-409d-9860-af59c6c7d172)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators(b90dedac-68bb-409d-9860-af59c6c7d172)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_openshift-operators_b90dedac-68bb-409d-9860-af59c6c7d172_0(76afef5f433e7cdc9a3ba89427ecfea419a001036a905934117b14969712d094): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" podUID="b90dedac-68bb-409d-9860-af59c6c7d172" Feb 16 15:03:38 crc kubenswrapper[4705]: I0216 15:03:38.419629 4705 scope.go:117] "RemoveContainer" containerID="c280e78eb2bfe3800a24e6f07f41b296d367a8891b813aff9f9aa9e3820570f6" Feb 16 15:03:39 crc kubenswrapper[4705]: I0216 15:03:39.363703 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2ljf7_0ec06562-0237-4709-9469-033783d7d545/kube-multus/2.log" Feb 16 15:03:39 crc kubenswrapper[4705]: I0216 15:03:39.364139 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2ljf7" event={"ID":"0ec06562-0237-4709-9469-033783d7d545","Type":"ContainerStarted","Data":"fd3158954c0966f76c5348ec79ca5afd950e93895e4999b2dd8f3c5211948c15"} Feb 16 15:03:39 crc kubenswrapper[4705]: I0216 15:03:39.987615 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-drlsg" Feb 16 15:03:41 crc kubenswrapper[4705]: I0216 15:03:41.418713 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:41 crc kubenswrapper[4705]: I0216 15:03:41.419595 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" Feb 16 15:03:41 crc kubenswrapper[4705]: I0216 15:03:41.869312 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg"] Feb 16 15:03:41 crc kubenswrapper[4705]: W0216 15:03:41.879011 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59894fc4_090e_4e57_84d9_c6fdbe5f3ceb.slice/crio-8ebdbe7cb28f5d1806a0d8d0a1f59b109a9b92c83792056bc6ec89cd00d9540a WatchSource:0}: Error finding container 8ebdbe7cb28f5d1806a0d8d0a1f59b109a9b92c83792056bc6ec89cd00d9540a: Status 404 returned error can't find the container with id 8ebdbe7cb28f5d1806a0d8d0a1f59b109a9b92c83792056bc6ec89cd00d9540a Feb 16 15:03:42 crc kubenswrapper[4705]: I0216 15:03:42.385887 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" event={"ID":"59894fc4-090e-4e57-84d9-c6fdbe5f3ceb","Type":"ContainerStarted","Data":"8ebdbe7cb28f5d1806a0d8d0a1f59b109a9b92c83792056bc6ec89cd00d9540a"} Feb 16 15:03:43 crc kubenswrapper[4705]: I0216 15:03:43.418845 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:43 crc kubenswrapper[4705]: I0216 15:03:43.419702 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" Feb 16 15:03:43 crc kubenswrapper[4705]: I0216 15:03:43.690136 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl"] Feb 16 15:03:43 crc kubenswrapper[4705]: W0216 15:03:43.705690 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb90dedac_68bb_409d_9860_af59c6c7d172.slice/crio-37e5624f5618c90fdcb9f48f6c7ce91de9d64c3bcb1ec60c8b2348483fd9c2e4 WatchSource:0}: Error finding container 37e5624f5618c90fdcb9f48f6c7ce91de9d64c3bcb1ec60c8b2348483fd9c2e4: Status 404 returned error can't find the container with id 37e5624f5618c90fdcb9f48f6c7ce91de9d64c3bcb1ec60c8b2348483fd9c2e4 Feb 16 15:03:44 crc kubenswrapper[4705]: I0216 15:03:44.404541 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" event={"ID":"b90dedac-68bb-409d-9860-af59c6c7d172","Type":"ContainerStarted","Data":"37e5624f5618c90fdcb9f48f6c7ce91de9d64c3bcb1ec60c8b2348483fd9c2e4"} Feb 16 15:03:44 crc kubenswrapper[4705]: I0216 15:03:44.419063 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:44 crc kubenswrapper[4705]: I0216 15:03:44.419923 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" Feb 16 15:03:45 crc kubenswrapper[4705]: I0216 15:03:45.431797 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:45 crc kubenswrapper[4705]: I0216 15:03:45.435150 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:46 crc kubenswrapper[4705]: I0216 15:03:46.394915 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh"] Feb 16 15:03:46 crc kubenswrapper[4705]: W0216 15:03:46.626421 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81328a1c_32d6_4ce6_9139_8418d2e8fa52.slice/crio-ed552d0d8be2e1250b806c969d18ce7932a76992c41c9dab5129806158029ad5 WatchSource:0}: Error finding container ed552d0d8be2e1250b806c969d18ce7932a76992c41c9dab5129806158029ad5: Status 404 returned error can't find the container with id ed552d0d8be2e1250b806c969d18ce7932a76992c41c9dab5129806158029ad5 Feb 16 15:03:46 crc kubenswrapper[4705]: I0216 15:03:46.830629 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-l2rxp"] Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.418716 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.419583 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.435464 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" event={"ID":"81328a1c-32d6-4ce6-9139-8418d2e8fa52","Type":"ContainerStarted","Data":"ed552d0d8be2e1250b806c969d18ce7932a76992c41c9dab5129806158029ad5"} Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.440589 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" event={"ID":"5510c272-cd32-4850-a9fa-daff2e045b92","Type":"ContainerStarted","Data":"32433cc64397b2492b2807c1ff47c03a3a3212494a85bdb06cb8b013277e21cd"} Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.443517 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" event={"ID":"b90dedac-68bb-409d-9860-af59c6c7d172","Type":"ContainerStarted","Data":"54aa794afbb8498da64d8b821fca306f4a783efc08888ed3cd08a7c8f1133617"} Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.448802 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" event={"ID":"59894fc4-090e-4e57-84d9-c6fdbe5f3ceb","Type":"ContainerStarted","Data":"f0333f19bc32e9d1033d8965933ee4967ba988003a995356f91685bb2f376c90"} Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.487314 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-lthbl" podStartSLOduration=29.498879018 podStartE2EDuration="32.487286223s" podCreationTimestamp="2026-02-16 15:03:15 +0000 UTC" firstStartedPulling="2026-02-16 15:03:43.710111726 +0000 UTC m=+617.895088812" lastFinishedPulling="2026-02-16 15:03:46.698518911 +0000 UTC m=+620.883496017" observedRunningTime="2026-02-16 
15:03:47.466080599 +0000 UTC m=+621.651057715" watchObservedRunningTime="2026-02-16 15:03:47.487286223 +0000 UTC m=+621.672263329" Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.760485 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-f8kwg" podStartSLOduration=28.007618422 podStartE2EDuration="32.760459985s" podCreationTimestamp="2026-02-16 15:03:15 +0000 UTC" firstStartedPulling="2026-02-16 15:03:41.882744573 +0000 UTC m=+616.067721649" lastFinishedPulling="2026-02-16 15:03:46.635586136 +0000 UTC m=+620.820563212" observedRunningTime="2026-02-16 15:03:47.510834334 +0000 UTC m=+621.695811420" watchObservedRunningTime="2026-02-16 15:03:47.760459985 +0000 UTC m=+621.945437061" Feb 16 15:03:47 crc kubenswrapper[4705]: I0216 15:03:47.776898 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tqj56"] Feb 16 15:03:48 crc kubenswrapper[4705]: I0216 15:03:48.456868 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" event={"ID":"8acc36de-d26d-44cd-bad6-d31f0a4a4520","Type":"ContainerStarted","Data":"27519a0110f1f01b3d8b6a5d5886fafe408d9dc7427a86d52a91244ae4b6fa4a"} Feb 16 15:03:48 crc kubenswrapper[4705]: I0216 15:03:48.461815 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" event={"ID":"81328a1c-32d6-4ce6-9139-8418d2e8fa52","Type":"ContainerStarted","Data":"bcfa20245fc1ebf1f3aec8c87d879ec7e94c99f98deafe4047172d958eb1aeab"} Feb 16 15:03:48 crc kubenswrapper[4705]: I0216 15:03:48.490534 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75758f868f-gkzfh" podStartSLOduration=32.744618451 podStartE2EDuration="33.490508181s" podCreationTimestamp="2026-02-16 15:03:15 +0000 UTC" 
firstStartedPulling="2026-02-16 15:03:46.633865287 +0000 UTC m=+620.818842363" lastFinishedPulling="2026-02-16 15:03:47.379754987 +0000 UTC m=+621.564732093" observedRunningTime="2026-02-16 15:03:48.486518599 +0000 UTC m=+622.671495675" watchObservedRunningTime="2026-02-16 15:03:48.490508181 +0000 UTC m=+622.675485267" Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.513227 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" event={"ID":"5510c272-cd32-4850-a9fa-daff2e045b92","Type":"ContainerStarted","Data":"8bb26ab6bfe59d817fad1bb9d57dcc847eecd3dae6f415211e9e8d4d90b0d0c5"} Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.514852 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.516737 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" event={"ID":"8acc36de-d26d-44cd-bad6-d31f0a4a4520","Type":"ContainerStarted","Data":"2bd79f14ae36d970841bd2e127f046ae2d5524516b0344d8522ee17390ef42c2"} Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.516902 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.527956 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.544154 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-l2rxp" podStartSLOduration=32.626454217 podStartE2EDuration="38.544136189s" podCreationTimestamp="2026-02-16 15:03:15 +0000 UTC" firstStartedPulling="2026-02-16 15:03:46.845598976 +0000 UTC m=+621.030576052" 
lastFinishedPulling="2026-02-16 15:03:52.763280948 +0000 UTC m=+626.948258024" observedRunningTime="2026-02-16 15:03:53.539799477 +0000 UTC m=+627.724776563" watchObservedRunningTime="2026-02-16 15:03:53.544136189 +0000 UTC m=+627.729113265" Feb 16 15:03:53 crc kubenswrapper[4705]: I0216 15:03:53.573910 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" podStartSLOduration=32.614310463 podStartE2EDuration="37.573887743s" podCreationTimestamp="2026-02-16 15:03:16 +0000 UTC" firstStartedPulling="2026-02-16 15:03:47.786990109 +0000 UTC m=+621.971967185" lastFinishedPulling="2026-02-16 15:03:52.746567389 +0000 UTC m=+626.931544465" observedRunningTime="2026-02-16 15:03:53.56629571 +0000 UTC m=+627.751272846" watchObservedRunningTime="2026-02-16 15:03:53.573887743 +0000 UTC m=+627.758864809" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.645269 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-txcpz"] Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.647153 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.650327 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.650501 4705 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9wjm5" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.650559 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.655686 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-txcpz"] Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.664976 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-46spv"] Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.672565 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-46spv" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.675266 4705 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-mn2f8" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.681700 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mdqgz"] Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.683286 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.685177 4705 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-nd789" Feb 16 15:04:00 crc kubenswrapper[4705]: I0216 15:04:00.686165 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27vqb\" (UniqueName: \"kubernetes.io/projected/ca614a32-6a4c-4802-8cb5-a927aac7a59a-kube-api-access-27vqb\") pod \"cert-manager-cainjector-cf98fcc89-txcpz\" (UID: \"ca614a32-6a4c-4802-8cb5-a927aac7a59a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.615213 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n29jj\" (UniqueName: \"kubernetes.io/projected/fc1f84cc-974e-42c8-8b49-120dfe74aa0f-kube-api-access-n29jj\") pod \"cert-manager-webhook-687f57d79b-mdqgz\" (UID: \"fc1f84cc-974e-42c8-8b49-120dfe74aa0f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.615297 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f8fb\" (UniqueName: \"kubernetes.io/projected/b6695119-142b-40cb-bdd8-e0e1f55e0e61-kube-api-access-7f8fb\") pod \"cert-manager-858654f9db-46spv\" (UID: \"b6695119-142b-40cb-bdd8-e0e1f55e0e61\") " pod="cert-manager/cert-manager-858654f9db-46spv" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.615366 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27vqb\" (UniqueName: \"kubernetes.io/projected/ca614a32-6a4c-4802-8cb5-a927aac7a59a-kube-api-access-27vqb\") pod \"cert-manager-cainjector-cf98fcc89-txcpz\" (UID: \"ca614a32-6a4c-4802-8cb5-a927aac7a59a\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.665319 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-46spv"] Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.679360 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27vqb\" (UniqueName: \"kubernetes.io/projected/ca614a32-6a4c-4802-8cb5-a927aac7a59a-kube-api-access-27vqb\") pod \"cert-manager-cainjector-cf98fcc89-txcpz\" (UID: \"ca614a32-6a4c-4802-8cb5-a927aac7a59a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.687499 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mdqgz"] Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.716916 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n29jj\" (UniqueName: \"kubernetes.io/projected/fc1f84cc-974e-42c8-8b49-120dfe74aa0f-kube-api-access-n29jj\") pod \"cert-manager-webhook-687f57d79b-mdqgz\" (UID: \"fc1f84cc-974e-42c8-8b49-120dfe74aa0f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.717050 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f8fb\" (UniqueName: \"kubernetes.io/projected/b6695119-142b-40cb-bdd8-e0e1f55e0e61-kube-api-access-7f8fb\") pod \"cert-manager-858654f9db-46spv\" (UID: \"b6695119-142b-40cb-bdd8-e0e1f55e0e61\") " pod="cert-manager/cert-manager-858654f9db-46spv" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.740535 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n29jj\" (UniqueName: \"kubernetes.io/projected/fc1f84cc-974e-42c8-8b49-120dfe74aa0f-kube-api-access-n29jj\") pod \"cert-manager-webhook-687f57d79b-mdqgz\" (UID: 
\"fc1f84cc-974e-42c8-8b49-120dfe74aa0f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.740844 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f8fb\" (UniqueName: \"kubernetes.io/projected/b6695119-142b-40cb-bdd8-e0e1f55e0e61-kube-api-access-7f8fb\") pod \"cert-manager-858654f9db-46spv\" (UID: \"b6695119-142b-40cb-bdd8-e0e1f55e0e61\") " pod="cert-manager/cert-manager-858654f9db-46spv" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.876860 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.889736 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-46spv" Feb 16 15:04:01 crc kubenswrapper[4705]: I0216 15:04:01.895955 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:02 crc kubenswrapper[4705]: I0216 15:04:02.387373 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-txcpz"] Feb 16 15:04:02 crc kubenswrapper[4705]: I0216 15:04:02.437173 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-46spv"] Feb 16 15:04:02 crc kubenswrapper[4705]: W0216 15:04:02.454765 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc1f84cc_974e_42c8_8b49_120dfe74aa0f.slice/crio-d35392c021935fcb20be18abb0c2085c49953bb7373808bcfaa5be8bdcc5f2c6 WatchSource:0}: Error finding container d35392c021935fcb20be18abb0c2085c49953bb7373808bcfaa5be8bdcc5f2c6: Status 404 returned error can't find the container with id d35392c021935fcb20be18abb0c2085c49953bb7373808bcfaa5be8bdcc5f2c6 Feb 16 15:04:02 crc 
kubenswrapper[4705]: I0216 15:04:02.455556 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mdqgz"] Feb 16 15:04:02 crc kubenswrapper[4705]: I0216 15:04:02.675176 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-46spv" event={"ID":"b6695119-142b-40cb-bdd8-e0e1f55e0e61","Type":"ContainerStarted","Data":"5b7f181475f17306b492564d008685798ca79631f1970db129f9d36580874bf4"} Feb 16 15:04:02 crc kubenswrapper[4705]: I0216 15:04:02.679216 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" event={"ID":"ca614a32-6a4c-4802-8cb5-a927aac7a59a","Type":"ContainerStarted","Data":"99ca1bcc126be53996c5380e0dce62da80c4ec330c0e0c5641497bcd317fd910"} Feb 16 15:04:02 crc kubenswrapper[4705]: I0216 15:04:02.680489 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" event={"ID":"fc1f84cc-974e-42c8-8b49-120dfe74aa0f","Type":"ContainerStarted","Data":"d35392c021935fcb20be18abb0c2085c49953bb7373808bcfaa5be8bdcc5f2c6"} Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.417404 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-tqj56" Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.718447 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-46spv" event={"ID":"b6695119-142b-40cb-bdd8-e0e1f55e0e61","Type":"ContainerStarted","Data":"4545222601f9f06cd26254dbf52fe9f2e960e72f003261b59875146b1cbb42a7"} Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.720612 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" event={"ID":"ca614a32-6a4c-4802-8cb5-a927aac7a59a","Type":"ContainerStarted","Data":"337e968ec4e10aa825e3261df4185ac89feb16f9c242af4eff79221d0637b53f"} Feb 16 15:04:06 crc 
kubenswrapper[4705]: I0216 15:04:06.722185 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" event={"ID":"fc1f84cc-974e-42c8-8b49-120dfe74aa0f","Type":"ContainerStarted","Data":"f295e69cc8831f0062d92f5967ad40485e3a1d75ed48166739c5d90c37f0aedc"} Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.722331 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.739408 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-46spv" podStartSLOduration=3.044082255 podStartE2EDuration="6.739367506s" podCreationTimestamp="2026-02-16 15:04:00 +0000 UTC" firstStartedPulling="2026-02-16 15:04:02.442544814 +0000 UTC m=+636.627521880" lastFinishedPulling="2026-02-16 15:04:06.137830055 +0000 UTC m=+640.322807131" observedRunningTime="2026-02-16 15:04:06.737602017 +0000 UTC m=+640.922579093" watchObservedRunningTime="2026-02-16 15:04:06.739367506 +0000 UTC m=+640.924344572" Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.777436 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" podStartSLOduration=3.017783617 podStartE2EDuration="6.777410922s" podCreationTimestamp="2026-02-16 15:04:00 +0000 UTC" firstStartedPulling="2026-02-16 15:04:02.45843742 +0000 UTC m=+636.643414496" lastFinishedPulling="2026-02-16 15:04:06.218064725 +0000 UTC m=+640.403041801" observedRunningTime="2026-02-16 15:04:06.760601911 +0000 UTC m=+640.945578997" watchObservedRunningTime="2026-02-16 15:04:06.777410922 +0000 UTC m=+640.962387998" Feb 16 15:04:06 crc kubenswrapper[4705]: I0216 15:04:06.781337 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-txcpz" podStartSLOduration=3.03964357 
podStartE2EDuration="6.781324132s" podCreationTimestamp="2026-02-16 15:04:00 +0000 UTC" firstStartedPulling="2026-02-16 15:04:02.39356477 +0000 UTC m=+636.578541846" lastFinishedPulling="2026-02-16 15:04:06.135245332 +0000 UTC m=+640.320222408" observedRunningTime="2026-02-16 15:04:06.775346344 +0000 UTC m=+640.960323430" watchObservedRunningTime="2026-02-16 15:04:06.781324132 +0000 UTC m=+640.966301208" Feb 16 15:04:11 crc kubenswrapper[4705]: I0216 15:04:11.900610 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-mdqgz" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.621405 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd"] Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.625227 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.627991 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.650315 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd"] Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.709755 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.709919 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.709989 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-582gs\" (UniqueName: \"kubernetes.io/projected/8035ad9d-50ca-4849-aefe-f1251588793d-kube-api-access-582gs\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.811774 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.811848 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-582gs\" (UniqueName: \"kubernetes.io/projected/8035ad9d-50ca-4849-aefe-f1251588793d-kube-api-access-582gs\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.811884 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.812332 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.812714 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.835030 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-582gs\" (UniqueName: \"kubernetes.io/projected/8035ad9d-50ca-4849-aefe-f1251588793d-kube-api-access-582gs\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.951795 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.995708 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng"] Feb 16 15:04:34 crc kubenswrapper[4705]: I0216 15:04:34.997849 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.013774 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng"] Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.119467 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/c1187d92-0ea8-46f2-9784-ddea0852aa5f-kube-api-access-zsdfl\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.119642 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.119675 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.225257 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/c1187d92-0ea8-46f2-9784-ddea0852aa5f-kube-api-access-zsdfl\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.225350 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.225370 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.226036 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: 
\"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.226260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.255056 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/c1187d92-0ea8-46f2-9784-ddea0852aa5f-kube-api-access-zsdfl\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.305627 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd"] Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.388768 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.823235 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng"] Feb 16 15:04:35 crc kubenswrapper[4705]: W0216 15:04:35.827024 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1187d92_0ea8_46f2_9784_ddea0852aa5f.slice/crio-15e4fdf897b170831a333f8b493243c4d6b84cca48b0b72e5e0bc5e4f06fb9c3 WatchSource:0}: Error finding container 15e4fdf897b170831a333f8b493243c4d6b84cca48b0b72e5e0bc5e4f06fb9c3: Status 404 returned error can't find the container with id 15e4fdf897b170831a333f8b493243c4d6b84cca48b0b72e5e0bc5e4f06fb9c3 Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.948822 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" event={"ID":"c1187d92-0ea8-46f2-9784-ddea0852aa5f","Type":"ContainerStarted","Data":"15e4fdf897b170831a333f8b493243c4d6b84cca48b0b72e5e0bc5e4f06fb9c3"} Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.951441 4705 generic.go:334] "Generic (PLEG): container finished" podID="8035ad9d-50ca-4849-aefe-f1251588793d" containerID="4263c18fa3994bb3a2cb96b7de43e5a88c3cdce9347f094c3adb74ac109bd8f7" exitCode=0 Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.951531 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" event={"ID":"8035ad9d-50ca-4849-aefe-f1251588793d","Type":"ContainerDied","Data":"4263c18fa3994bb3a2cb96b7de43e5a88c3cdce9347f094c3adb74ac109bd8f7"} Feb 16 15:04:35 crc kubenswrapper[4705]: I0216 15:04:35.951565 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" event={"ID":"8035ad9d-50ca-4849-aefe-f1251588793d","Type":"ContainerStarted","Data":"81fb9520542d9111b0945dfb00863fab30055efbd2c5fc9f4f5bf7565f8f6676"} Feb 16 15:04:36 crc kubenswrapper[4705]: I0216 15:04:36.975212 4705 generic.go:334] "Generic (PLEG): container finished" podID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerID="8db8c4258cc92e68c0a4e62af157ff619a4ff3a159d989daf517b07de4a4941a" exitCode=0 Feb 16 15:04:36 crc kubenswrapper[4705]: I0216 15:04:36.975639 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" event={"ID":"c1187d92-0ea8-46f2-9784-ddea0852aa5f","Type":"ContainerDied","Data":"8db8c4258cc92e68c0a4e62af157ff619a4ff3a159d989daf517b07de4a4941a"} Feb 16 15:04:37 crc kubenswrapper[4705]: I0216 15:04:37.986068 4705 generic.go:334] "Generic (PLEG): container finished" podID="8035ad9d-50ca-4849-aefe-f1251588793d" containerID="ca29647308473811807d28967d66117cc8ee0b3e41a9fe4539d2f4b6eee494b2" exitCode=0 Feb 16 15:04:37 crc kubenswrapper[4705]: I0216 15:04:37.986157 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" event={"ID":"8035ad9d-50ca-4849-aefe-f1251588793d","Type":"ContainerDied","Data":"ca29647308473811807d28967d66117cc8ee0b3e41a9fe4539d2f4b6eee494b2"} Feb 16 15:04:38 crc kubenswrapper[4705]: I0216 15:04:38.996938 4705 generic.go:334] "Generic (PLEG): container finished" podID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerID="43b0ce933e00fc3cdad93d5e9cd92a0063cfa5f531c6ee046a18569e7fdc3778" exitCode=0 Feb 16 15:04:38 crc kubenswrapper[4705]: I0216 15:04:38.997414 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" 
event={"ID":"c1187d92-0ea8-46f2-9784-ddea0852aa5f","Type":"ContainerDied","Data":"43b0ce933e00fc3cdad93d5e9cd92a0063cfa5f531c6ee046a18569e7fdc3778"} Feb 16 15:04:39 crc kubenswrapper[4705]: I0216 15:04:39.002653 4705 generic.go:334] "Generic (PLEG): container finished" podID="8035ad9d-50ca-4849-aefe-f1251588793d" containerID="f874d09a5619ee0aa8c5f7b601d48b8aee377a4d3ab31c3d506f349dcbb4dca4" exitCode=0 Feb 16 15:04:39 crc kubenswrapper[4705]: I0216 15:04:39.002713 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" event={"ID":"8035ad9d-50ca-4849-aefe-f1251588793d","Type":"ContainerDied","Data":"f874d09a5619ee0aa8c5f7b601d48b8aee377a4d3ab31c3d506f349dcbb4dca4"} Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.013870 4705 generic.go:334] "Generic (PLEG): container finished" podID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerID="7b4d0aa930b07d0887b5bd246f294f7ffaa52f9ad69d88f75940c3fac48b22e4" exitCode=0 Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.014585 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" event={"ID":"c1187d92-0ea8-46f2-9784-ddea0852aa5f","Type":"ContainerDied","Data":"7b4d0aa930b07d0887b5bd246f294f7ffaa52f9ad69d88f75940c3fac48b22e4"} Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.351843 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.438231 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-582gs\" (UniqueName: \"kubernetes.io/projected/8035ad9d-50ca-4849-aefe-f1251588793d-kube-api-access-582gs\") pod \"8035ad9d-50ca-4849-aefe-f1251588793d\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.438342 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-util\") pod \"8035ad9d-50ca-4849-aefe-f1251588793d\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.438477 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-bundle\") pod \"8035ad9d-50ca-4849-aefe-f1251588793d\" (UID: \"8035ad9d-50ca-4849-aefe-f1251588793d\") " Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.439690 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-bundle" (OuterVolumeSpecName: "bundle") pod "8035ad9d-50ca-4849-aefe-f1251588793d" (UID: "8035ad9d-50ca-4849-aefe-f1251588793d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.448567 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8035ad9d-50ca-4849-aefe-f1251588793d-kube-api-access-582gs" (OuterVolumeSpecName: "kube-api-access-582gs") pod "8035ad9d-50ca-4849-aefe-f1251588793d" (UID: "8035ad9d-50ca-4849-aefe-f1251588793d"). InnerVolumeSpecName "kube-api-access-582gs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.457803 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-util" (OuterVolumeSpecName: "util") pod "8035ad9d-50ca-4849-aefe-f1251588793d" (UID: "8035ad9d-50ca-4849-aefe-f1251588793d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.541286 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.541341 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8035ad9d-50ca-4849-aefe-f1251588793d-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:40 crc kubenswrapper[4705]: I0216 15:04:40.541357 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-582gs\" (UniqueName: \"kubernetes.io/projected/8035ad9d-50ca-4849-aefe-f1251588793d-kube-api-access-582gs\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.026312 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd" event={"ID":"8035ad9d-50ca-4849-aefe-f1251588793d","Type":"ContainerDied","Data":"81fb9520542d9111b0945dfb00863fab30055efbd2c5fc9f4f5bf7565f8f6676"} Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.026434 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81fb9520542d9111b0945dfb00863fab30055efbd2c5fc9f4f5bf7565f8f6676" Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.026470 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd"
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.354421 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng"
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.455265 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-bundle\") pod \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") "
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.455398 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-util\") pod \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") "
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.455499 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/c1187d92-0ea8-46f2-9784-ddea0852aa5f-kube-api-access-zsdfl\") pod \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\" (UID: \"c1187d92-0ea8-46f2-9784-ddea0852aa5f\") "
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.457489 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-bundle" (OuterVolumeSpecName: "bundle") pod "c1187d92-0ea8-46f2-9784-ddea0852aa5f" (UID: "c1187d92-0ea8-46f2-9784-ddea0852aa5f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.460578 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1187d92-0ea8-46f2-9784-ddea0852aa5f-kube-api-access-zsdfl" (OuterVolumeSpecName: "kube-api-access-zsdfl") pod "c1187d92-0ea8-46f2-9784-ddea0852aa5f" (UID: "c1187d92-0ea8-46f2-9784-ddea0852aa5f"). InnerVolumeSpecName "kube-api-access-zsdfl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.471558 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-util" (OuterVolumeSpecName: "util") pod "c1187d92-0ea8-46f2-9784-ddea0852aa5f" (UID: "c1187d92-0ea8-46f2-9784-ddea0852aa5f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.558757 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.558834 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c1187d92-0ea8-46f2-9784-ddea0852aa5f-util\") on node \"crc\" DevicePath \"\""
Feb 16 15:04:41 crc kubenswrapper[4705]: I0216 15:04:41.558857 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/c1187d92-0ea8-46f2-9784-ddea0852aa5f-kube-api-access-zsdfl\") on node \"crc\" DevicePath \"\""
Feb 16 15:04:42 crc kubenswrapper[4705]: I0216 15:04:42.035646 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng" event={"ID":"c1187d92-0ea8-46f2-9784-ddea0852aa5f","Type":"ContainerDied","Data":"15e4fdf897b170831a333f8b493243c4d6b84cca48b0b72e5e0bc5e4f06fb9c3"}
Feb 16 15:04:42 crc kubenswrapper[4705]: I0216 15:04:42.035693 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15e4fdf897b170831a333f8b493243c4d6b84cca48b0b72e5e0bc5e4f06fb9c3"
Feb 16 15:04:42 crc kubenswrapper[4705]: I0216 15:04:42.035744 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.747633 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"]
Feb 16 15:04:50 crc kubenswrapper[4705]: E0216 15:04:50.748510 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="extract"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748526 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="extract"
Feb 16 15:04:50 crc kubenswrapper[4705]: E0216 15:04:50.748539 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="util"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748546 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="util"
Feb 16 15:04:50 crc kubenswrapper[4705]: E0216 15:04:50.748557 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="util"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748564 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="util"
Feb 16 15:04:50 crc kubenswrapper[4705]: E0216 15:04:50.748577 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="pull"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748584 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="pull"
Feb 16 15:04:50 crc kubenswrapper[4705]: E0216 15:04:50.748596 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="pull"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748602 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="pull"
Feb 16 15:04:50 crc kubenswrapper[4705]: E0216 15:04:50.748619 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="extract"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748635 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="extract"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748777 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1187d92-0ea8-46f2-9784-ddea0852aa5f" containerName="extract"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.748795 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8035ad9d-50ca-4849-aefe-f1251588793d" containerName="extract"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.749538 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.751080 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.752564 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.752900 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.753052 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.753174 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-sjnz6"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.753506 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.765559 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"]
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.909703 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzgpb\" (UniqueName: \"kubernetes.io/projected/e0f8cfad-0639-40d4-8a2c-832935b8cddc-kube-api-access-pzgpb\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.909757 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-apiservice-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.909871 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-webhook-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.909925 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e0f8cfad-0639-40d4-8a2c-832935b8cddc-manager-config\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:50 crc kubenswrapper[4705]: I0216 15:04:50.909947 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.011419 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-webhook-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.011506 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e0f8cfad-0639-40d4-8a2c-832935b8cddc-manager-config\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.011533 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.011561 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzgpb\" (UniqueName: \"kubernetes.io/projected/e0f8cfad-0639-40d4-8a2c-832935b8cddc-kube-api-access-pzgpb\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.011585 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-apiservice-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.012530 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e0f8cfad-0639-40d4-8a2c-832935b8cddc-manager-config\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.017301 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-apiservice-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.020966 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-webhook-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.025034 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0f8cfad-0639-40d4-8a2c-832935b8cddc-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.046305 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzgpb\" (UniqueName: \"kubernetes.io/projected/e0f8cfad-0639-40d4-8a2c-832935b8cddc-kube-api-access-pzgpb\") pod \"loki-operator-controller-manager-6b7769c4bd-hnqwn\" (UID: \"e0f8cfad-0639-40d4-8a2c-832935b8cddc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.066937 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:04:51 crc kubenswrapper[4705]: I0216 15:04:51.555484 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"]
Feb 16 15:04:52 crc kubenswrapper[4705]: I0216 15:04:52.118515 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn" event={"ID":"e0f8cfad-0639-40d4-8a2c-832935b8cddc","Type":"ContainerStarted","Data":"7bd1552fa2d85fafc6c2973bcbde5a096f7b8ec9bda8c7925dabbf9774def2ff"}
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.825414 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-9x6cn"]
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.827424 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn"
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.829984 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt"
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.830308 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-cfqgz"
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.830458 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt"
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.843402 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-9x6cn"]
Feb 16 15:04:56 crc kubenswrapper[4705]: I0216 15:04:56.927802 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g8n2\" (UniqueName: \"kubernetes.io/projected/0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9-kube-api-access-8g8n2\") pod \"cluster-logging-operator-c769fd969-9x6cn\" (UID: \"0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9\") " pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn"
Feb 16 15:04:57 crc kubenswrapper[4705]: I0216 15:04:57.029909 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g8n2\" (UniqueName: \"kubernetes.io/projected/0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9-kube-api-access-8g8n2\") pod \"cluster-logging-operator-c769fd969-9x6cn\" (UID: \"0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9\") " pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn"
Feb 16 15:04:57 crc kubenswrapper[4705]: I0216 15:04:57.050896 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g8n2\" (UniqueName: \"kubernetes.io/projected/0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9-kube-api-access-8g8n2\") pod \"cluster-logging-operator-c769fd969-9x6cn\" (UID: \"0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9\") " pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn"
Feb 16 15:04:57 crc kubenswrapper[4705]: I0216 15:04:57.144840 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn"
Feb 16 15:04:57 crc kubenswrapper[4705]: I0216 15:04:57.161165 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn" event={"ID":"e0f8cfad-0639-40d4-8a2c-832935b8cddc","Type":"ContainerStarted","Data":"0e84ca013ecd33f8ae86af8ed8895a2fd615863534e538c0b29af4c75f33733e"}
Feb 16 15:04:57 crc kubenswrapper[4705]: I0216 15:04:57.587252 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-9x6cn"]
Feb 16 15:04:58 crc kubenswrapper[4705]: I0216 15:04:58.170858 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn" event={"ID":"0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9","Type":"ContainerStarted","Data":"30c80eb188a0efbcb14073af18fc2c2116d55b33c31058db857dbf3f2c23d1ee"}
Feb 16 15:05:01 crc kubenswrapper[4705]: I0216 15:05:01.684835 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:05:01 crc kubenswrapper[4705]: I0216 15:05:01.685179 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:05:06 crc kubenswrapper[4705]: I0216 15:05:06.250760 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn" event={"ID":"e0f8cfad-0639-40d4-8a2c-832935b8cddc","Type":"ContainerStarted","Data":"ecf3866f8c6a9cba0642e4e7162243f23c560e99bcf384e3953c330cf4a73284"}
Feb 16 15:05:06 crc kubenswrapper[4705]: I0216 15:05:06.252874 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:05:06 crc kubenswrapper[4705]: I0216 15:05:06.253537 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn"
Feb 16 15:05:06 crc kubenswrapper[4705]: I0216 15:05:06.253662 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn" event={"ID":"0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9","Type":"ContainerStarted","Data":"edbf86ced194dcd9b2596b8532b9883b1f36f25e9748d36d7a1990702f108154"}
Feb 16 15:05:06 crc kubenswrapper[4705]: I0216 15:05:06.286135 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-6b7769c4bd-hnqwn" podStartSLOduration=1.888619209 podStartE2EDuration="16.286097133s" podCreationTimestamp="2026-02-16 15:04:50 +0000 UTC" firstStartedPulling="2026-02-16 15:04:51.571732862 +0000 UTC m=+685.756709938" lastFinishedPulling="2026-02-16 15:05:05.969210786 +0000 UTC m=+700.154187862" observedRunningTime="2026-02-16 15:05:06.27742309 +0000 UTC m=+700.462400186" watchObservedRunningTime="2026-02-16 15:05:06.286097133 +0000 UTC m=+700.471074209"
Feb 16 15:05:06 crc kubenswrapper[4705]: I0216 15:05:06.345748 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-9x6cn" podStartSLOduration=1.94887196 podStartE2EDuration="10.345715805s" podCreationTimestamp="2026-02-16 15:04:56 +0000 UTC" firstStartedPulling="2026-02-16 15:04:57.598952626 +0000 UTC m=+691.783929702" lastFinishedPulling="2026-02-16 15:05:05.995796481 +0000 UTC m=+700.180773547" observedRunningTime="2026-02-16 15:05:06.339027828 +0000 UTC m=+700.524004924" watchObservedRunningTime="2026-02-16 15:05:06.345715805 +0000 UTC m=+700.530692881"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.717853 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.719441 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.721868 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.722185 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.728803 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.818625 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") " pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.818810 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgwn6\" (UniqueName: \"kubernetes.io/projected/cc3a618d-0da6-49be-a4bc-3e3166db35e8-kube-api-access-sgwn6\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") " pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.920683 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") " pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.921134 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgwn6\" (UniqueName: \"kubernetes.io/projected/cc3a618d-0da6-49be-a4bc-3e3166db35e8-kube-api-access-sgwn6\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") " pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.934782 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.935011 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a77f1d7d0ca7e926c7c2bebcbec44eb37e90e66a58abd69d693dca1682a22d00/globalmount\"" pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.958611 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgwn6\" (UniqueName: \"kubernetes.io/projected/cc3a618d-0da6-49be-a4bc-3e3166db35e8-kube-api-access-sgwn6\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") " pod="minio-dev/minio"
Feb 16 15:05:12 crc kubenswrapper[4705]: I0216 15:05:12.987837 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25572db7-8858-4e20-bc72-a3fc6adaeddc\") pod \"minio\" (UID: \"cc3a618d-0da6-49be-a4bc-3e3166db35e8\") " pod="minio-dev/minio"
Feb 16 15:05:13 crc kubenswrapper[4705]: I0216 15:05:13.054074 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 16 15:05:13 crc kubenswrapper[4705]: I0216 15:05:13.277453 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 16 15:05:13 crc kubenswrapper[4705]: I0216 15:05:13.304331 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"cc3a618d-0da6-49be-a4bc-3e3166db35e8","Type":"ContainerStarted","Data":"aa6b9dbab1f0465af37cc4b896964331d66780c0758a80d67ca96d04dc8d190a"}
Feb 16 15:05:17 crc kubenswrapper[4705]: I0216 15:05:17.341780 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"cc3a618d-0da6-49be-a4bc-3e3166db35e8","Type":"ContainerStarted","Data":"378a7e88f34a51eaa2c0fb8bb3936de544e12a1c0bf3c9e2c5eb3ce8ced6f2ba"}
Feb 16 15:05:17 crc kubenswrapper[4705]: I0216 15:05:17.369465 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.634906335 podStartE2EDuration="7.369430317s" podCreationTimestamp="2026-02-16 15:05:10 +0000 UTC" firstStartedPulling="2026-02-16 15:05:13.290502906 +0000 UTC m=+707.475479982" lastFinishedPulling="2026-02-16 15:05:17.025026888 +0000 UTC m=+711.210003964" observedRunningTime="2026-02-16 15:05:17.357189774 +0000 UTC m=+711.542166870" watchObservedRunningTime="2026-02-16 15:05:17.369430317 +0000 UTC m=+711.554407433"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.484177 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"]
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.502800 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.508387 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"]
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.516187 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.516254 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.516595 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.516683 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-z98xc"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.517542 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.623986 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.624067 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.624102 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb0e04c-e741-4dbe-8c09-94379b736809-config\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.624198 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqxbf\" (UniqueName: \"kubernetes.io/projected/feb0e04c-e741-4dbe-8c09-94379b736809-kube-api-access-bqxbf\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.624266 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.654317 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"]
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.655611 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.658261 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.658545 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.658690 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.678511 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"]
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.726858 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqxbf\" (UniqueName: \"kubernetes.io/projected/feb0e04c-e741-4dbe-8c09-94379b736809-kube-api-access-bqxbf\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.726936 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.726970 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727003 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727066 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727094 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qs6s\" (UniqueName: \"kubernetes.io/projected/dd10ec10-e122-430f-afaf-b0b8222a6b15-kube-api-access-2qs6s\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727130 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727196 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb0e04c-e741-4dbe-8c09-94379b736809-config\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727228 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd10ec10-e122-430f-afaf-b0b8222a6b15-config\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.727258 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.728760 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.729513 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb0e04c-e741-4dbe-8c09-94379b736809-config\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.743951 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.744029 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/feb0e04c-e741-4dbe-8c09-94379b736809-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.769178 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqxbf\" (UniqueName: \"kubernetes.io/projected/feb0e04c-e741-4dbe-8c09-94379b736809-kube-api-access-bqxbf\") pod \"logging-loki-distributor-5d5548c9f5-s8kg2\" (UID: \"feb0e04c-e741-4dbe-8c09-94379b736809\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"
Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.804674 4705 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8"] Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.806514 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.813576 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8"] Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.815927 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.816177 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.834411 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.834472 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.834527 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-querier-grpc\") pod 
\"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.834550 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qs6s\" (UniqueName: \"kubernetes.io/projected/dd10ec10-e122-430f-afaf-b0b8222a6b15-kube-api-access-2qs6s\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.834596 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd10ec10-e122-430f-afaf-b0b8222a6b15-config\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.834616 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.835793 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.836821 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dd10ec10-e122-430f-afaf-b0b8222a6b15-config\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.846412 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.847916 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.849044 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.849131 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/dd10ec10-e122-430f-afaf-b0b8222a6b15-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.871869 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qs6s\" (UniqueName: \"kubernetes.io/projected/dd10ec10-e122-430f-afaf-b0b8222a6b15-kube-api-access-2qs6s\") pod \"logging-loki-querier-76bf7b6d45-rbcrd\" (UID: \"dd10ec10-e122-430f-afaf-b0b8222a6b15\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.913658 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t"] Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.917611 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.925830 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.926098 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.926276 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.926722 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.937582 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.937819 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.937917 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkp5f\" (UniqueName: 
\"kubernetes.io/projected/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-kube-api-access-pkp5f\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.938000 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-config\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.938125 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.945757 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.955422 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t"] Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.964814 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-84f4bcb569-mzgch"] Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.967904 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.970324 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-lvbwz" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.995998 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:22 crc kubenswrapper[4705]: I0216 15:05:22.997965 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-84f4bcb569-mzgch"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.044574 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-tenants\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045149 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-lokistack-gateway\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc 
kubenswrapper[4705]: I0216 15:05:23.045182 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-lokistack-gateway\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045208 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045251 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045482 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-tls-secret\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045559 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045603 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-rbac\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045863 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48llb\" (UniqueName: \"kubernetes.io/projected/a85ad7e0-59d0-412d-96e1-298020ef9927-kube-api-access-48llb\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045935 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.045971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-tls-secret\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 
15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046056 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cfgx\" (UniqueName: \"kubernetes.io/projected/d1223933-4ce9-41dd-9c8a-14a59b540e20-kube-api-access-4cfgx\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046299 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046350 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046383 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-rbac\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046506 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: 
\"kubernetes.io/secret/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046555 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkp5f\" (UniqueName: \"kubernetes.io/projected/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-kube-api-access-pkp5f\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046626 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046663 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-config\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.046773 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.047082 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-tenants\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.047912 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-config\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.051239 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.051308 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.065329 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkp5f\" (UniqueName: 
\"kubernetes.io/projected/8e2f02fa-7b78-49ef-8c1a-f9cf7387e063-kube-api-access-pkp5f\") pod \"logging-loki-query-frontend-6d6859c548-mbwk8\" (UID: \"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148664 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-tenants\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148739 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-tenants\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148771 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-lokistack-gateway\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148803 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-lokistack-gateway\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148836 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148861 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148888 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-tls-secret\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148951 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.148984 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-rbac\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " 
pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149009 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48llb\" (UniqueName: \"kubernetes.io/projected/a85ad7e0-59d0-412d-96e1-298020ef9927-kube-api-access-48llb\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149041 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-tls-secret\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149077 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cfgx\" (UniqueName: \"kubernetes.io/projected/d1223933-4ce9-41dd-9c8a-14a59b540e20-kube-api-access-4cfgx\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149116 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149164 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-rbac\") pod 
\"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149190 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.149250 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.150792 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.150950 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-lokistack-gateway\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.151329 4705 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-rbac\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.151559 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-lokistack-gateway\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.152156 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.152427 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.152933 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-ca-bundle\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc 
kubenswrapper[4705]: I0216 15:05:23.153198 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d1223933-4ce9-41dd-9c8a-14a59b540e20-rbac\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.156152 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-tls-secret\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.157013 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-tls-secret\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.157032 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.161894 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-tenants\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 
15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.169417 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d1223933-4ce9-41dd-9c8a-14a59b540e20-tenants\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.170099 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a85ad7e0-59d0-412d-96e1-298020ef9927-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.172457 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cfgx\" (UniqueName: \"kubernetes.io/projected/d1223933-4ce9-41dd-9c8a-14a59b540e20-kube-api-access-4cfgx\") pod \"logging-loki-gateway-84f4bcb569-zxt7t\" (UID: \"d1223933-4ce9-41dd-9c8a-14a59b540e20\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.180232 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48llb\" (UniqueName: \"kubernetes.io/projected/a85ad7e0-59d0-412d-96e1-298020ef9927-kube-api-access-48llb\") pod \"logging-loki-gateway-84f4bcb569-mzgch\" (UID: \"a85ad7e0-59d0-412d-96e1-298020ef9927\") " pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.220333 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.248003 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.327248 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.350290 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd"] Feb 16 15:05:23 crc kubenswrapper[4705]: W0216 15:05:23.361585 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd10ec10_e122_430f_afaf_b0b8222a6b15.slice/crio-ebd2074637414fde3c0ad09ee0c5131f6655ffb4052f49edcf77af5bfc0bf653 WatchSource:0}: Error finding container ebd2074637414fde3c0ad09ee0c5131f6655ffb4052f49edcf77af5bfc0bf653: Status 404 returned error can't find the container with id ebd2074637414fde3c0ad09ee0c5131f6655ffb4052f49edcf77af5bfc0bf653 Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.397565 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" event={"ID":"dd10ec10-e122-430f-afaf-b0b8222a6b15","Type":"ContainerStarted","Data":"ebd2074637414fde3c0ad09ee0c5131f6655ffb4052f49edcf77af5bfc0bf653"} Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.469231 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2"] Feb 16 15:05:23 crc kubenswrapper[4705]: W0216 15:05:23.478764 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeb0e04c_e741_4dbe_8c09_94379b736809.slice/crio-5092ad0012abbf10871873507f2f1d24e50dbd8a6214907e06b18b395964e0f4 WatchSource:0}: Error finding container 5092ad0012abbf10871873507f2f1d24e50dbd8a6214907e06b18b395964e0f4: Status 404 returned error can't find the container with 
id 5092ad0012abbf10871873507f2f1d24e50dbd8a6214907e06b18b395964e0f4 Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.497599 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8"] Feb 16 15:05:23 crc kubenswrapper[4705]: W0216 15:05:23.509660 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e2f02fa_7b78_49ef_8c1a_f9cf7387e063.slice/crio-138818e4ab401787b1053e2001d4b4eb9143cad87b3e00fb036a313bbab9cbe3 WatchSource:0}: Error finding container 138818e4ab401787b1053e2001d4b4eb9143cad87b3e00fb036a313bbab9cbe3: Status 404 returned error can't find the container with id 138818e4ab401787b1053e2001d4b4eb9143cad87b3e00fb036a313bbab9cbe3 Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.588816 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t"] Feb 16 15:05:23 crc kubenswrapper[4705]: W0216 15:05:23.594587 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1223933_4ce9_41dd_9c8a_14a59b540e20.slice/crio-cb783c475168c71919554f6a1af3bb455e0c9cf4fc55b60222c77612398f1edb WatchSource:0}: Error finding container cb783c475168c71919554f6a1af3bb455e0c9cf4fc55b60222c77612398f1edb: Status 404 returned error can't find the container with id cb783c475168c71919554f6a1af3bb455e0c9cf4fc55b60222c77612398f1edb Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.643577 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-84f4bcb569-mzgch"] Feb 16 15:05:23 crc kubenswrapper[4705]: W0216 15:05:23.649016 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda85ad7e0_59d0_412d_96e1_298020ef9927.slice/crio-ac32b35aa6fbfecf2166f6882d699ef30475f8e1c9605a9e2ead5bb34d472066 WatchSource:0}: Error finding container ac32b35aa6fbfecf2166f6882d699ef30475f8e1c9605a9e2ead5bb34d472066: Status 404 returned error can't find the container with id ac32b35aa6fbfecf2166f6882d699ef30475f8e1c9605a9e2ead5bb34d472066 Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.666795 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.667905 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.673797 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.680082 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.682564 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.745262 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.751028 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.754452 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.754717 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.759499 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-101ebabd-da74-4b9e-89b2-949f688a2852\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-101ebabd-da74-4b9e-89b2-949f688a2852\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.769447 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.860873 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.860948 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-101ebabd-da74-4b9e-89b2-949f688a2852\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-101ebabd-da74-4b9e-89b2-949f688a2852\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.860980 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.861019 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krwzt\" (UniqueName: \"kubernetes.io/projected/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-kube-api-access-krwzt\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.861055 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.861085 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1deba5f8-a176-451a-a911-46202ad4f272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1deba5f8-a176-451a-a911-46202ad4f272\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.861119 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") 
" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.861140 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-config\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.861548 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.862687 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.868771 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.868828 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-101ebabd-da74-4b9e-89b2-949f688a2852\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-101ebabd-da74-4b9e-89b2-949f688a2852\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e8f3f970af946958ef14fb10954f50fbe9bc4c87a801d5543e394c89c77251a3/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.873585 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.873995 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 16 15:05:23 crc 
kubenswrapper[4705]: I0216 15:05:23.876894 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.913681 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-101ebabd-da74-4b9e-89b2-949f688a2852\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-101ebabd-da74-4b9e-89b2-949f688a2852\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.962709 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.962774 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmd7t\" (UniqueName: \"kubernetes.io/projected/4cde3c29-9511-489b-9849-468cae07d312-kube-api-access-hmd7t\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.962811 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.962962 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963043 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cde3c29-9511-489b-9849-468cae07d312-config\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963116 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963196 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963234 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b168f061-d361-40be-9e55-01f5eac92511\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b168f061-d361-40be-9e55-01f5eac92511\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963278 
4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd14a989-22ac-46cb-9295-a99e2043542b-config\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963305 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963338 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg8mn\" (UniqueName: \"kubernetes.io/projected/cd14a989-22ac-46cb-9295-a99e2043542b-kube-api-access-qg8mn\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963443 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963478 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krwzt\" (UniqueName: \"kubernetes.io/projected/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-kube-api-access-krwzt\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " 
pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963564 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1deba5f8-a176-451a-a911-46202ad4f272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1deba5f8-a176-451a-a911-46202ad4f272\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963591 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963614 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963674 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f92f18a1-f41f-4da7-9509-3177223c614b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f92f18a1-f41f-4da7-9509-3177223c614b\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963734 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: 
\"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963761 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-config\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.963807 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.965190 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.965306 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-config\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.965702 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: 
\"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.969269 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.969338 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.969754 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.970048 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.970079 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1deba5f8-a176-451a-a911-46202ad4f272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1deba5f8-a176-451a-a911-46202ad4f272\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1fc7625b610fde2ebde857343bbc163e776be4c7204cb9706d02837e83df33a1/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.989260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krwzt\" (UniqueName: \"kubernetes.io/projected/5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf-kube-api-access-krwzt\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:23 crc kubenswrapper[4705]: I0216 15:05:23.996025 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1deba5f8-a176-451a-a911-46202ad4f272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1deba5f8-a176-451a-a911-46202ad4f272\") pod \"logging-loki-ingester-0\" (UID: \"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf\") " pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067723 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067782 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4cde3c29-9511-489b-9849-468cae07d312-config\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067812 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067841 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b168f061-d361-40be-9e55-01f5eac92511\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b168f061-d361-40be-9e55-01f5eac92511\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067867 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd14a989-22ac-46cb-9295-a99e2043542b-config\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067888 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067907 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-qg8mn\" (UniqueName: \"kubernetes.io/projected/cd14a989-22ac-46cb-9295-a99e2043542b-kube-api-access-qg8mn\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067930 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067961 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067976 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.067999 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f92f18a1-f41f-4da7-9509-3177223c614b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f92f18a1-f41f-4da7-9509-3177223c614b\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.068026 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.068161 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.068188 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmd7t\" (UniqueName: \"kubernetes.io/projected/4cde3c29-9511-489b-9849-468cae07d312-kube-api-access-hmd7t\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.072058 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd14a989-22ac-46cb-9295-a99e2043542b-config\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.073165 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cde3c29-9511-489b-9849-468cae07d312-config\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.073337 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.073522 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.074560 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.075535 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.075577 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f92f18a1-f41f-4da7-9509-3177223c614b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f92f18a1-f41f-4da7-9509-3177223c614b\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/059f9482ade21a6fab869ffa328de857655647028e0e091ba883de990e9a2058/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.076238 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.076477 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b168f061-d361-40be-9e55-01f5eac92511\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b168f061-d361-40be-9e55-01f5eac92511\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fc2d44c44bc077227e9eda49f371df5d5070e788785d311c4369b7064adf81c1/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.076962 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.077036 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.078334 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.079171 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4cde3c29-9511-489b-9849-468cae07d312-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.082250 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cd14a989-22ac-46cb-9295-a99e2043542b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.087795 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg8mn\" (UniqueName: \"kubernetes.io/projected/cd14a989-22ac-46cb-9295-a99e2043542b-kube-api-access-qg8mn\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.089036 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmd7t\" (UniqueName: 
\"kubernetes.io/projected/4cde3c29-9511-489b-9849-468cae07d312-kube-api-access-hmd7t\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.105576 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b168f061-d361-40be-9e55-01f5eac92511\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b168f061-d361-40be-9e55-01f5eac92511\") pod \"logging-loki-index-gateway-0\" (UID: \"4cde3c29-9511-489b-9849-468cae07d312\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.120853 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f92f18a1-f41f-4da7-9509-3177223c614b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f92f18a1-f41f-4da7-9509-3177223c614b\") pod \"logging-loki-compactor-0\" (UID: \"cd14a989-22ac-46cb-9295-a99e2043542b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.182050 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.286249 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.408031 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" event={"ID":"a85ad7e0-59d0-412d-96e1-298020ef9927","Type":"ContainerStarted","Data":"ac32b35aa6fbfecf2166f6882d699ef30475f8e1c9605a9e2ead5bb34d472066"} Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.410126 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" event={"ID":"d1223933-4ce9-41dd-9c8a-14a59b540e20","Type":"ContainerStarted","Data":"cb783c475168c71919554f6a1af3bb455e0c9cf4fc55b60222c77612398f1edb"} Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.411966 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" event={"ID":"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063","Type":"ContainerStarted","Data":"138818e4ab401787b1053e2001d4b4eb9143cad87b3e00fb036a313bbab9cbe3"} Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.413065 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.416685 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2" event={"ID":"feb0e04c-e741-4dbe-8c09-94379b736809","Type":"ContainerStarted","Data":"5092ad0012abbf10871873507f2f1d24e50dbd8a6214907e06b18b395964e0f4"} Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.474039 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.645628 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 16 15:05:24 crc kubenswrapper[4705]: I0216 15:05:24.781326 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 16 15:05:24 crc kubenswrapper[4705]: W0216 15:05:24.788725 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a1922a4_a6c5_4187_bcd3_f0e05f3e4fcf.slice/crio-dbed47dd64732097873df5e33e1b0a9ecb6839a301173e13d5028ba062651a09 WatchSource:0}: Error finding container dbed47dd64732097873df5e33e1b0a9ecb6839a301173e13d5028ba062651a09: Status 404 returned error can't find the container with id dbed47dd64732097873df5e33e1b0a9ecb6839a301173e13d5028ba062651a09 Feb 16 15:05:25 crc kubenswrapper[4705]: I0216 15:05:25.430324 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"4cde3c29-9511-489b-9849-468cae07d312","Type":"ContainerStarted","Data":"bb3a76b2644e8f453b7e65d0c2d2642f518adf93a2cf1d2006bbcb5508c311db"} Feb 16 15:05:25 crc kubenswrapper[4705]: I0216 15:05:25.433816 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"cd14a989-22ac-46cb-9295-a99e2043542b","Type":"ContainerStarted","Data":"bd17767eb420cb50a7476afdf3953375c02baa3f84459d005d6c6c70fe4c62f4"} Feb 16 15:05:25 crc kubenswrapper[4705]: I0216 15:05:25.435980 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf","Type":"ContainerStarted","Data":"dbed47dd64732097873df5e33e1b0a9ecb6839a301173e13d5028ba062651a09"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.478337 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" event={"ID":"d1223933-4ce9-41dd-9c8a-14a59b540e20","Type":"ContainerStarted","Data":"2cb9e33a6b308fe859d480ca4f85b284a9ec0d1dc5815b682ebc4fa41358c9de"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.480949 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" event={"ID":"8e2f02fa-7b78-49ef-8c1a-f9cf7387e063","Type":"ContainerStarted","Data":"25fe34bc2dee89b56b8d1066434a686c8212d3548445b5b18842e2f636bed49e"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.481102 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.483843 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf","Type":"ContainerStarted","Data":"e36adcc1845222ea6aabc0798a461dfac9fbf69bed9f414f2135f9de9465bd81"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.484072 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.486727 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2" event={"ID":"feb0e04c-e741-4dbe-8c09-94379b736809","Type":"ContainerStarted","Data":"0898983eaade81e3c16cf2dae23355d2b43d67c7e538771db63091f6b8a2b4ff"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.486835 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.489659 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" event={"ID":"dd10ec10-e122-430f-afaf-b0b8222a6b15","Type":"ContainerStarted","Data":"685a660f6ce2c485929ed6aab815066ba00e70fbe828e19dfcbb2b7db3c335a4"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.489842 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.492665 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"4cde3c29-9511-489b-9849-468cae07d312","Type":"ContainerStarted","Data":"5293a3d9145ebec32fc251c6042e31ae49e08aba9738d0ff05c45795e9a16324"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.492918 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.522031 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"cd14a989-22ac-46cb-9295-a99e2043542b","Type":"ContainerStarted","Data":"170f3d9bf403028a4c045add8161ddaac6745f8ea7595405bc39129af89c463d"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.522642 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 
15:05:28.535166 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" event={"ID":"a85ad7e0-59d0-412d-96e1-298020ef9927","Type":"ContainerStarted","Data":"8f97c6db444154127ed16344b1036c667cea185ec37143df51a58b08d6c19332"} Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.546655 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" podStartSLOduration=2.794890918 podStartE2EDuration="6.546622806s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:23.513388814 +0000 UTC m=+717.698365890" lastFinishedPulling="2026-02-16 15:05:27.265120702 +0000 UTC m=+721.450097778" observedRunningTime="2026-02-16 15:05:28.526307029 +0000 UTC m=+722.711284175" watchObservedRunningTime="2026-02-16 15:05:28.546622806 +0000 UTC m=+722.731599892" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.571216 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=4.040354009 podStartE2EDuration="6.571188143s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:24.793149798 +0000 UTC m=+718.978126864" lastFinishedPulling="2026-02-16 15:05:27.323983882 +0000 UTC m=+721.508960998" observedRunningTime="2026-02-16 15:05:28.555300032 +0000 UTC m=+722.740277148" watchObservedRunningTime="2026-02-16 15:05:28.571188143 +0000 UTC m=+722.756165229" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.596458 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2" podStartSLOduration=2.745114645 podStartE2EDuration="6.596425539s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:23.511238873 +0000 UTC m=+717.696215949" lastFinishedPulling="2026-02-16 
15:05:27.362549757 +0000 UTC m=+721.547526843" observedRunningTime="2026-02-16 15:05:28.580770875 +0000 UTC m=+722.765747961" watchObservedRunningTime="2026-02-16 15:05:28.596425539 +0000 UTC m=+722.781402625" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.621835 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" podStartSLOduration=2.625216843 podStartE2EDuration="6.621796019s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:23.363660305 +0000 UTC m=+717.548637381" lastFinishedPulling="2026-02-16 15:05:27.360239461 +0000 UTC m=+721.545216557" observedRunningTime="2026-02-16 15:05:28.616246472 +0000 UTC m=+722.801223548" watchObservedRunningTime="2026-02-16 15:05:28.621796019 +0000 UTC m=+722.806773105" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.638465 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.811647099 podStartE2EDuration="6.638444881s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:24.497243312 +0000 UTC m=+718.682220388" lastFinishedPulling="2026-02-16 15:05:27.324041054 +0000 UTC m=+721.509018170" observedRunningTime="2026-02-16 15:05:28.636707972 +0000 UTC m=+722.821685068" watchObservedRunningTime="2026-02-16 15:05:28.638444881 +0000 UTC m=+722.823421967" Feb 16 15:05:28 crc kubenswrapper[4705]: I0216 15:05:28.662777 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.967139761 podStartE2EDuration="6.662747231s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:24.666174725 +0000 UTC m=+718.851151801" lastFinishedPulling="2026-02-16 15:05:27.361782175 +0000 UTC m=+721.546759271" observedRunningTime="2026-02-16 15:05:28.65776719 +0000 
UTC m=+722.842744266" watchObservedRunningTime="2026-02-16 15:05:28.662747231 +0000 UTC m=+722.847724317" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.567328 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" event={"ID":"a85ad7e0-59d0-412d-96e1-298020ef9927","Type":"ContainerStarted","Data":"296f13b1c0b8a02f4c5d212dba05858a4c73d4651a8c9a90edaf9faaf60cb797"} Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.567914 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.567940 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.573839 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" event={"ID":"d1223933-4ce9-41dd-9c8a-14a59b540e20","Type":"ContainerStarted","Data":"731fd9dff81616c1c76942fc4486fb7cfc52e188ed984242d050050f730d7cc6"} Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.577656 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.577899 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.585321 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.588302 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 
15:05:30.592517 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.602573 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-84f4bcb569-mzgch" podStartSLOduration=2.601729426 podStartE2EDuration="8.602537684s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:23.65494188 +0000 UTC m=+717.839918946" lastFinishedPulling="2026-02-16 15:05:29.655750128 +0000 UTC m=+723.840727204" observedRunningTime="2026-02-16 15:05:30.597739328 +0000 UTC m=+724.782716434" watchObservedRunningTime="2026-02-16 15:05:30.602537684 +0000 UTC m=+724.787514810" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.609274 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" Feb 16 15:05:30 crc kubenswrapper[4705]: I0216 15:05:30.664394 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-84f4bcb569-zxt7t" podStartSLOduration=2.613701886 podStartE2EDuration="8.664345867s" podCreationTimestamp="2026-02-16 15:05:22 +0000 UTC" firstStartedPulling="2026-02-16 15:05:23.601182945 +0000 UTC m=+717.786160021" lastFinishedPulling="2026-02-16 15:05:29.651826926 +0000 UTC m=+723.836804002" observedRunningTime="2026-02-16 15:05:30.659282724 +0000 UTC m=+724.844259800" watchObservedRunningTime="2026-02-16 15:05:30.664345867 +0000 UTC m=+724.849322943" Feb 16 15:05:31 crc kubenswrapper[4705]: I0216 15:05:31.684954 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:05:31 
crc kubenswrapper[4705]: I0216 15:05:31.685536 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:05:42 crc kubenswrapper[4705]: I0216 15:05:42.859811 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-s8kg2" Feb 16 15:05:43 crc kubenswrapper[4705]: I0216 15:05:43.004313 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-rbcrd" Feb 16 15:05:43 crc kubenswrapper[4705]: I0216 15:05:43.230708 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-mbwk8" Feb 16 15:05:44 crc kubenswrapper[4705]: I0216 15:05:44.190154 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 16 15:05:44 crc kubenswrapper[4705]: I0216 15:05:44.294167 4705 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 16 15:05:44 crc kubenswrapper[4705]: I0216 15:05:44.294244 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:05:44 crc kubenswrapper[4705]: I0216 15:05:44.428734 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" 
Feb 16 15:05:54 crc kubenswrapper[4705]: I0216 15:05:54.292735 4705 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 16 15:05:54 crc kubenswrapper[4705]: I0216 15:05:54.293537 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:06:00 crc kubenswrapper[4705]: I0216 15:06:00.330436 4705 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.684167 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.684316 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.684425 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.685260 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"66c40339ff6d451b12f9977b3110b2e136ea4dcbaee6612ad6a69e020c815948"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.685363 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://66c40339ff6d451b12f9977b3110b2e136ea4dcbaee6612ad6a69e020c815948" gracePeriod=600 Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.896230 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="66c40339ff6d451b12f9977b3110b2e136ea4dcbaee6612ad6a69e020c815948" exitCode=0 Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.896299 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"66c40339ff6d451b12f9977b3110b2e136ea4dcbaee6612ad6a69e020c815948"} Feb 16 15:06:01 crc kubenswrapper[4705]: I0216 15:06:01.896764 4705 scope.go:117] "RemoveContainer" containerID="8ed511d58ebaa68773f182923341f6793c7c9792bc8c0ee7250b0f3212fee0a6" Feb 16 15:06:02 crc kubenswrapper[4705]: I0216 15:06:02.907081 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"edd58db5c11c3fe5d8c13faff30e9ac5edf92d3f3197f975dddc0c31823f6a25"} Feb 16 15:06:04 crc kubenswrapper[4705]: I0216 15:06:04.292859 4705 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe 
failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 16 15:06:04 crc kubenswrapper[4705]: I0216 15:06:04.294560 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:06:14 crc kubenswrapper[4705]: I0216 15:06:14.293447 4705 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 16 15:06:14 crc kubenswrapper[4705]: I0216 15:06:14.294743 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:06:24 crc kubenswrapper[4705]: I0216 15:06:24.292309 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.928447 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-rgfsg"] Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.930762 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.934237 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.942533 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.942638 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-lf4hf" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.942841 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.943353 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.948580 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.965802 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d8d377fe-28fb-4403-97b4-c34aae8f2c09-tmp\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.965903 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-syslog-receiver\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966192 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjnzr\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-kube-api-access-wjnzr\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966281 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966350 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966458 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config-openshift-service-cacrt\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966537 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-sa-token\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966608 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d8d377fe-28fb-4403-97b4-c34aae8f2c09-datadir\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966680 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-trusted-ca\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966714 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-token\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.966895 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-entrypoint\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:41 crc kubenswrapper[4705]: I0216 15:06:41.971650 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rgfsg"] Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.021250 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rgfsg"] Feb 16 15:06:42 crc kubenswrapper[4705]: E0216 15:06:42.022041 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint 
kube-api-access-wjnzr metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-rgfsg" podUID="d8d377fe-28fb-4403-97b4-c34aae8f2c09" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.068785 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-entrypoint\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069018 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d8d377fe-28fb-4403-97b4-c34aae8f2c09-tmp\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069125 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-syslog-receiver\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069275 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjnzr\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-kube-api-access-wjnzr\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069323 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics\") pod \"collector-rgfsg\" (UID: 
\"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069562 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069618 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config-openshift-service-cacrt\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069698 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-sa-token\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069760 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d8d377fe-28fb-4403-97b4-c34aae8f2c09-datadir\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: E0216 15:06:42.069786 4705 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069834 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-trusted-ca\") pod 
\"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.069921 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d8d377fe-28fb-4403-97b4-c34aae8f2c09-datadir\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: E0216 15:06:42.069953 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics podName:d8d377fe-28fb-4403-97b4-c34aae8f2c09 nodeName:}" failed. No retries permitted until 2026-02-16 15:06:42.569902826 +0000 UTC m=+796.754879912 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics") pod "collector-rgfsg" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09") : secret "collector-metrics" not found Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.070043 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-token\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.070571 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-entrypoint\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.070627 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.070891 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config-openshift-service-cacrt\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.071741 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-trusted-ca\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.078238 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d8d377fe-28fb-4403-97b4-c34aae8f2c09-tmp\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.078350 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-syslog-receiver\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.080785 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-token\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " 
pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.099798 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjnzr\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-kube-api-access-wjnzr\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.100793 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-sa-token\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.311356 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.328192 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.376177 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d8d377fe-28fb-4403-97b4-c34aae8f2c09-tmp\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.376241 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d8d377fe-28fb-4403-97b4-c34aae8f2c09-datadir\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.376289 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.376306 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8d377fe-28fb-4403-97b4-c34aae8f2c09-datadir" (OuterVolumeSpecName: "datadir") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.376333 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-entrypoint\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.376476 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-sa-token\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.377119 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config" (OuterVolumeSpecName: "config") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.377356 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-syslog-receiver\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.377241 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.377568 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config-openshift-service-cacrt\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.377729 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-trusted-ca\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.377823 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-token\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.378100 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjnzr\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-kube-api-access-wjnzr\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.378118 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.378828 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.379174 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.379225 4705 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d8d377fe-28fb-4403-97b4-c34aae8f2c09-datadir\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.379249 4705 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.379274 4705 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.379300 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8d377fe-28fb-4403-97b4-c34aae8f2c09-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.380241 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d8d377fe-28fb-4403-97b4-c34aae8f2c09-tmp" (OuterVolumeSpecName: "tmp") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.380587 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-sa-token" (OuterVolumeSpecName: "sa-token") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.382211 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.382998 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-kube-api-access-wjnzr" (OuterVolumeSpecName: "kube-api-access-wjnzr") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "kube-api-access-wjnzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.384605 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-token" (OuterVolumeSpecName: "collector-token") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). 
InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.481665 4705 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.481712 4705 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-collector-token\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.481729 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjnzr\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-kube-api-access-wjnzr\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.481748 4705 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d8d377fe-28fb-4403-97b4-c34aae8f2c09-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.481762 4705 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d8d377fe-28fb-4403-97b4-c34aae8f2c09-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.583640 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.587285 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics\") pod \"collector-rgfsg\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " pod="openshift-logging/collector-rgfsg" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.685623 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics\") pod \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\" (UID: \"d8d377fe-28fb-4403-97b4-c34aae8f2c09\") " Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.690926 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics" (OuterVolumeSpecName: "metrics") pod "d8d377fe-28fb-4403-97b4-c34aae8f2c09" (UID: "d8d377fe-28fb-4403-97b4-c34aae8f2c09"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:06:42 crc kubenswrapper[4705]: I0216 15:06:42.789152 4705 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d8d377fe-28fb-4403-97b4-c34aae8f2c09-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.323143 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rgfsg" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.399073 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rgfsg"] Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.408351 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-rgfsg"] Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.416078 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-rv6rf"] Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.417583 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.422460 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-lf4hf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.424227 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.424668 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rv6rf"] Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.425711 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.425867 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.426532 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.436590 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.505671 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-trusted-ca\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.505773 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-collector-syslog-receiver\") pod \"collector-rv6rf\" (UID: 
\"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.505963 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-datadir\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506039 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-metrics\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506064 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-sa-token\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506219 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d597\" (UniqueName: \"kubernetes.io/projected/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-kube-api-access-6d597\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506391 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-config-openshift-service-cacrt\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " 
pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506535 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-config\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506587 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-entrypoint\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506651 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-tmp\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.506798 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-collector-token\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609582 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-trusted-ca\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609667 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-collector-syslog-receiver\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609707 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-datadir\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609739 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-metrics\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609762 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-sa-token\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609803 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d597\" (UniqueName: \"kubernetes.io/projected/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-kube-api-access-6d597\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609826 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: 
\"kubernetes.io/host-path/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-datadir\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.609832 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-config-openshift-service-cacrt\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.610427 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-config\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.610458 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-entrypoint\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.610494 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-tmp\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.610528 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-collector-token\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " 
pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.610713 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-config-openshift-service-cacrt\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.611302 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-entrypoint\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.611689 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-trusted-ca\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.611868 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-config\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.615656 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-tmp\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.616462 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" 
(UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-collector-token\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.622082 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-collector-syslog-receiver\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.629042 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-metrics\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.632069 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-sa-token\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.642829 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d597\" (UniqueName: \"kubernetes.io/projected/48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9-kube-api-access-6d597\") pod \"collector-rv6rf\" (UID: \"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9\") " pod="openshift-logging/collector-rv6rf" Feb 16 15:06:43 crc kubenswrapper[4705]: I0216 15:06:43.742253 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rv6rf" Feb 16 15:06:44 crc kubenswrapper[4705]: I0216 15:06:44.039241 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rv6rf"] Feb 16 15:06:44 crc kubenswrapper[4705]: I0216 15:06:44.335290 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-rv6rf" event={"ID":"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9","Type":"ContainerStarted","Data":"9a89ecd25a509c86c44efe042de8993269b255cdfdcebd9e9c00fda36d971aee"} Feb 16 15:06:44 crc kubenswrapper[4705]: I0216 15:06:44.435010 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d377fe-28fb-4403-97b4-c34aae8f2c09" path="/var/lib/kubelet/pods/d8d377fe-28fb-4403-97b4-c34aae8f2c09/volumes" Feb 16 15:06:51 crc kubenswrapper[4705]: I0216 15:06:51.398612 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-rv6rf" event={"ID":"48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9","Type":"ContainerStarted","Data":"d180b29398284a3458b20aa464fcb3e1345b711a067a14d34a87b057e213eee5"} Feb 16 15:06:51 crc kubenswrapper[4705]: I0216 15:06:51.434149 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-rv6rf" podStartSLOduration=1.653876648 podStartE2EDuration="8.434118242s" podCreationTimestamp="2026-02-16 15:06:43 +0000 UTC" firstStartedPulling="2026-02-16 15:06:44.051073884 +0000 UTC m=+798.236050970" lastFinishedPulling="2026-02-16 15:06:50.831315478 +0000 UTC m=+805.016292564" observedRunningTime="2026-02-16 15:06:51.427666069 +0000 UTC m=+805.612643185" watchObservedRunningTime="2026-02-16 15:06:51.434118242 +0000 UTC m=+805.619095348" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.215361 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp"] Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.217621 4705 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.219446 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.226499 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp"] Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.236042 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.236134 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.236161 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkkdf\" (UniqueName: \"kubernetes.io/projected/50f390f7-dc79-47dd-80e2-436b17df094c-kube-api-access-hkkdf\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 
15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.338158 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.338260 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.338298 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkkdf\" (UniqueName: \"kubernetes.io/projected/50f390f7-dc79-47dd-80e2-436b17df094c-kube-api-access-hkkdf\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.338840 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.338848 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.362961 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkkdf\" (UniqueName: \"kubernetes.io/projected/50f390f7-dc79-47dd-80e2-436b17df094c-kube-api-access-hkkdf\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:23 crc kubenswrapper[4705]: I0216 15:07:23.547487 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:24 crc kubenswrapper[4705]: I0216 15:07:24.069137 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp"] Feb 16 15:07:24 crc kubenswrapper[4705]: I0216 15:07:24.737273 4705 generic.go:334] "Generic (PLEG): container finished" podID="50f390f7-dc79-47dd-80e2-436b17df094c" containerID="1563375574eb2b8b91769c0d8f258af832ff8c1a14bd66b6ed209d680a889ede" exitCode=0 Feb 16 15:07:24 crc kubenswrapper[4705]: I0216 15:07:24.737524 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" event={"ID":"50f390f7-dc79-47dd-80e2-436b17df094c","Type":"ContainerDied","Data":"1563375574eb2b8b91769c0d8f258af832ff8c1a14bd66b6ed209d680a889ede"} Feb 16 15:07:24 crc kubenswrapper[4705]: I0216 15:07:24.737833 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" event={"ID":"50f390f7-dc79-47dd-80e2-436b17df094c","Type":"ContainerStarted","Data":"ce474200642ecc06a922478d2aae2eb9d6bc6e32f0ba75f63fbb103dabe77bb1"} Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.562149 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t2m7d"] Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.563957 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.578566 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-utilities\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.578634 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-catalog-content\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.578741 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jcwn\" (UniqueName: \"kubernetes.io/projected/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-kube-api-access-5jcwn\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.585583 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-t2m7d"] Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.681126 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jcwn\" (UniqueName: \"kubernetes.io/projected/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-kube-api-access-5jcwn\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.681296 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-utilities\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.681351 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-catalog-content\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.681960 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-utilities\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.682008 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-catalog-content\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 
crc kubenswrapper[4705]: I0216 15:07:25.711539 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jcwn\" (UniqueName: \"kubernetes.io/projected/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-kube-api-access-5jcwn\") pod \"redhat-operators-t2m7d\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:25 crc kubenswrapper[4705]: I0216 15:07:25.880152 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:26 crc kubenswrapper[4705]: I0216 15:07:26.300748 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2m7d"] Feb 16 15:07:26 crc kubenswrapper[4705]: I0216 15:07:26.754510 4705 generic.go:334] "Generic (PLEG): container finished" podID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerID="6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09" exitCode=0 Feb 16 15:07:26 crc kubenswrapper[4705]: I0216 15:07:26.754852 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2m7d" event={"ID":"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98","Type":"ContainerDied","Data":"6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09"} Feb 16 15:07:26 crc kubenswrapper[4705]: I0216 15:07:26.755010 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2m7d" event={"ID":"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98","Type":"ContainerStarted","Data":"e1f2ace940038734299af510330a0ecb19a41c91fefa525c71d6e5edc9c59bea"} Feb 16 15:07:26 crc kubenswrapper[4705]: I0216 15:07:26.757650 4705 generic.go:334] "Generic (PLEG): container finished" podID="50f390f7-dc79-47dd-80e2-436b17df094c" containerID="b8b771a80b5e3cd43e27f54f4c0b684dd43dbf6f9cae0337e32463d6b69962cc" exitCode=0 Feb 16 15:07:26 crc kubenswrapper[4705]: I0216 15:07:26.757712 4705 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" event={"ID":"50f390f7-dc79-47dd-80e2-436b17df094c","Type":"ContainerDied","Data":"b8b771a80b5e3cd43e27f54f4c0b684dd43dbf6f9cae0337e32463d6b69962cc"} Feb 16 15:07:27 crc kubenswrapper[4705]: I0216 15:07:27.770626 4705 generic.go:334] "Generic (PLEG): container finished" podID="50f390f7-dc79-47dd-80e2-436b17df094c" containerID="e93f9485ae8ff2f76d28f1342b41935818058bb14972c7d8a19feeb546abf353" exitCode=0 Feb 16 15:07:27 crc kubenswrapper[4705]: I0216 15:07:27.770703 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" event={"ID":"50f390f7-dc79-47dd-80e2-436b17df094c","Type":"ContainerDied","Data":"e93f9485ae8ff2f76d28f1342b41935818058bb14972c7d8a19feeb546abf353"} Feb 16 15:07:28 crc kubenswrapper[4705]: I0216 15:07:28.780396 4705 generic.go:334] "Generic (PLEG): container finished" podID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerID="b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f" exitCode=0 Feb 16 15:07:28 crc kubenswrapper[4705]: I0216 15:07:28.780482 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2m7d" event={"ID":"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98","Type":"ContainerDied","Data":"b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f"} Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.184185 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.195146 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-bundle\") pod \"50f390f7-dc79-47dd-80e2-436b17df094c\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.195405 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkkdf\" (UniqueName: \"kubernetes.io/projected/50f390f7-dc79-47dd-80e2-436b17df094c-kube-api-access-hkkdf\") pod \"50f390f7-dc79-47dd-80e2-436b17df094c\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.195585 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-util\") pod \"50f390f7-dc79-47dd-80e2-436b17df094c\" (UID: \"50f390f7-dc79-47dd-80e2-436b17df094c\") " Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.195803 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-bundle" (OuterVolumeSpecName: "bundle") pod "50f390f7-dc79-47dd-80e2-436b17df094c" (UID: "50f390f7-dc79-47dd-80e2-436b17df094c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.196040 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.212613 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f390f7-dc79-47dd-80e2-436b17df094c-kube-api-access-hkkdf" (OuterVolumeSpecName: "kube-api-access-hkkdf") pod "50f390f7-dc79-47dd-80e2-436b17df094c" (UID: "50f390f7-dc79-47dd-80e2-436b17df094c"). InnerVolumeSpecName "kube-api-access-hkkdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.223263 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-util" (OuterVolumeSpecName: "util") pod "50f390f7-dc79-47dd-80e2-436b17df094c" (UID: "50f390f7-dc79-47dd-80e2-436b17df094c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.297823 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkkdf\" (UniqueName: \"kubernetes.io/projected/50f390f7-dc79-47dd-80e2-436b17df094c-kube-api-access-hkkdf\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.297859 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50f390f7-dc79-47dd-80e2-436b17df094c-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.794006 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" event={"ID":"50f390f7-dc79-47dd-80e2-436b17df094c","Type":"ContainerDied","Data":"ce474200642ecc06a922478d2aae2eb9d6bc6e32f0ba75f63fbb103dabe77bb1"} Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.794567 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce474200642ecc06a922478d2aae2eb9d6bc6e32f0ba75f63fbb103dabe77bb1" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.794055 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp" Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.798161 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2m7d" event={"ID":"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98","Type":"ContainerStarted","Data":"ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8"} Feb 16 15:07:29 crc kubenswrapper[4705]: I0216 15:07:29.844626 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t2m7d" podStartSLOduration=2.367610545 podStartE2EDuration="4.84459558s" podCreationTimestamp="2026-02-16 15:07:25 +0000 UTC" firstStartedPulling="2026-02-16 15:07:26.756612397 +0000 UTC m=+840.941589473" lastFinishedPulling="2026-02-16 15:07:29.233597442 +0000 UTC m=+843.418574508" observedRunningTime="2026-02-16 15:07:29.833436946 +0000 UTC m=+844.018414042" watchObservedRunningTime="2026-02-16 15:07:29.84459558 +0000 UTC m=+844.029572696" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.029040 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-h6nzt"] Feb 16 15:07:33 crc kubenswrapper[4705]: E0216 15:07:33.029945 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="util" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.029973 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="util" Feb 16 15:07:33 crc kubenswrapper[4705]: E0216 15:07:33.030015 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="extract" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.030029 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="extract" Feb 
16 15:07:33 crc kubenswrapper[4705]: E0216 15:07:33.030072 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="pull" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.030088 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="pull" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.030366 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f390f7-dc79-47dd-80e2-436b17df094c" containerName="extract" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.031513 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.040816 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.041176 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.041235 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-4l42x" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.044758 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-h6nzt"] Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.059315 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh4r6\" (UniqueName: \"kubernetes.io/projected/b2d83f82-a3e4-4937-8484-5f8174b5d986-kube-api-access-sh4r6\") pod \"nmstate-operator-694c9596b7-h6nzt\" (UID: \"b2d83f82-a3e4-4937-8484-5f8174b5d986\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.162070 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh4r6\" (UniqueName: \"kubernetes.io/projected/b2d83f82-a3e4-4937-8484-5f8174b5d986-kube-api-access-sh4r6\") pod \"nmstate-operator-694c9596b7-h6nzt\" (UID: \"b2d83f82-a3e4-4937-8484-5f8174b5d986\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.190404 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh4r6\" (UniqueName: \"kubernetes.io/projected/b2d83f82-a3e4-4937-8484-5f8174b5d986-kube-api-access-sh4r6\") pod \"nmstate-operator-694c9596b7-h6nzt\" (UID: \"b2d83f82-a3e4-4937-8484-5f8174b5d986\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.396729 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt" Feb 16 15:07:33 crc kubenswrapper[4705]: I0216 15:07:33.932646 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-h6nzt"] Feb 16 15:07:33 crc kubenswrapper[4705]: W0216 15:07:33.940574 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2d83f82_a3e4_4937_8484_5f8174b5d986.slice/crio-bdb9a64c520c82dab82cfd13ec3621bed27a35681095701409af2112d6765b40 WatchSource:0}: Error finding container bdb9a64c520c82dab82cfd13ec3621bed27a35681095701409af2112d6765b40: Status 404 returned error can't find the container with id bdb9a64c520c82dab82cfd13ec3621bed27a35681095701409af2112d6765b40 Feb 16 15:07:34 crc kubenswrapper[4705]: I0216 15:07:34.841904 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt" 
event={"ID":"b2d83f82-a3e4-4937-8484-5f8174b5d986","Type":"ContainerStarted","Data":"bdb9a64c520c82dab82cfd13ec3621bed27a35681095701409af2112d6765b40"} Feb 16 15:07:35 crc kubenswrapper[4705]: I0216 15:07:35.880717 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:35 crc kubenswrapper[4705]: I0216 15:07:35.880804 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:36 crc kubenswrapper[4705]: I0216 15:07:36.860361 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt" event={"ID":"b2d83f82-a3e4-4937-8484-5f8174b5d986","Type":"ContainerStarted","Data":"e751a2f010613d7e9387c73d0de7f4ffb7383aa7b995a971d3716eaf7056bbc0"} Feb 16 15:07:36 crc kubenswrapper[4705]: I0216 15:07:36.885244 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-h6nzt" podStartSLOduration=1.623486281 podStartE2EDuration="3.885215488s" podCreationTimestamp="2026-02-16 15:07:33 +0000 UTC" firstStartedPulling="2026-02-16 15:07:33.943596832 +0000 UTC m=+848.128573908" lastFinishedPulling="2026-02-16 15:07:36.205326039 +0000 UTC m=+850.390303115" observedRunningTime="2026-02-16 15:07:36.880982329 +0000 UTC m=+851.065959445" watchObservedRunningTime="2026-02-16 15:07:36.885215488 +0000 UTC m=+851.070192574" Feb 16 15:07:36 crc kubenswrapper[4705]: I0216 15:07:36.951101 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t2m7d" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="registry-server" probeResult="failure" output=< Feb 16 15:07:36 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:07:36 crc kubenswrapper[4705]: > Feb 16 15:07:37 crc kubenswrapper[4705]: I0216 15:07:37.996247 4705 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4"] Feb 16 15:07:37 crc kubenswrapper[4705]: I0216 15:07:37.998720 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.004125 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9w6g2" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.021584 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"] Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.022717 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.024036 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.030885 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4"] Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.044036 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"] Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.059557 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wr89v"] Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.060896 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.082901 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-nmstate-lock\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.082959 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncztk\" (UniqueName: \"kubernetes.io/projected/9ffb9d03-b8ea-44ff-9397-58b55c367d89-kube-api-access-ncztk\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.083257 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-ovs-socket\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.083490 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc2dl\" (UniqueName: \"kubernetes.io/projected/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-kube-api-access-vc2dl\") pod \"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.083569 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-tls-key-pair\") pod 
\"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.083727 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-dbus-socket\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.083832 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g24dz\" (UniqueName: \"kubernetes.io/projected/ed67458f-1875-405e-85a5-2a4f7d54089b-kube-api-access-g24dz\") pod \"nmstate-metrics-58c85c668d-tnbq4\" (UID: \"ed67458f-1875-405e-85a5-2a4f7d54089b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186123 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncztk\" (UniqueName: \"kubernetes.io/projected/9ffb9d03-b8ea-44ff-9397-58b55c367d89-kube-api-access-ncztk\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186198 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-ovs-socket\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186238 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc2dl\" (UniqueName: 
\"kubernetes.io/projected/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-kube-api-access-vc2dl\") pod \"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186262 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186418 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-ovs-socket\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186649 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-dbus-socket\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: E0216 15:07:38.186908 4705 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186945 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g24dz\" (UniqueName: \"kubernetes.io/projected/ed67458f-1875-405e-85a5-2a4f7d54089b-kube-api-access-g24dz\") pod \"nmstate-metrics-58c85c668d-tnbq4\" (UID: \"ed67458f-1875-405e-85a5-2a4f7d54089b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4" Feb 16 
15:07:38 crc kubenswrapper[4705]: E0216 15:07:38.187130 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-tls-key-pair podName:7a87077c-c5fa-4c92-9c08-44dcf11d38c7 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:38.687050634 +0000 UTC m=+852.872027710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-tls-key-pair") pod "nmstate-webhook-866bcb46dc-9kf74" (UID: "7a87077c-c5fa-4c92-9c08-44dcf11d38c7") : secret "openshift-nmstate-webhook" not found Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.186881 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-dbus-socket\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.187333 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-nmstate-lock\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.187333 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9ffb9d03-b8ea-44ff-9397-58b55c367d89-nmstate-lock\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.207262 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncztk\" (UniqueName: 
\"kubernetes.io/projected/9ffb9d03-b8ea-44ff-9397-58b55c367d89-kube-api-access-ncztk\") pod \"nmstate-handler-wr89v\" (UID: \"9ffb9d03-b8ea-44ff-9397-58b55c367d89\") " pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.208162 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g24dz\" (UniqueName: \"kubernetes.io/projected/ed67458f-1875-405e-85a5-2a4f7d54089b-kube-api-access-g24dz\") pod \"nmstate-metrics-58c85c668d-tnbq4\" (UID: \"ed67458f-1875-405e-85a5-2a4f7d54089b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.233171 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"] Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.234550 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.239993 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc2dl\" (UniqueName: \"kubernetes.io/projected/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-kube-api-access-vc2dl\") pod \"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.245595 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-n6nx6" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.245911 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.246032 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.261065 
4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"] Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.289631 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/303c8298-3e10-49e8-96b1-ed1dafcd23e3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.289986 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/303c8298-3e10-49e8-96b1-ed1dafcd23e3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.290094 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kpkn\" (UniqueName: \"kubernetes.io/projected/303c8298-3e10-49e8-96b1-ed1dafcd23e3-kube-api-access-9kpkn\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.313825 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.391845 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/303c8298-3e10-49e8-96b1-ed1dafcd23e3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.391927 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kpkn\" (UniqueName: \"kubernetes.io/projected/303c8298-3e10-49e8-96b1-ed1dafcd23e3-kube-api-access-9kpkn\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.391989 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/303c8298-3e10-49e8-96b1-ed1dafcd23e3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.393420 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/303c8298-3e10-49e8-96b1-ed1dafcd23e3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.395793 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/303c8298-3e10-49e8-96b1-ed1dafcd23e3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.400592 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.423401 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kpkn\" (UniqueName: \"kubernetes.io/projected/303c8298-3e10-49e8-96b1-ed1dafcd23e3-kube-api-access-9kpkn\") pod \"nmstate-console-plugin-5c78fc5d65-hl5c9\" (UID: \"303c8298-3e10-49e8-96b1-ed1dafcd23e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.518473 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5cb874789d-44cjq"] Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.519691 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.548707 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cb874789d-44cjq"] Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.581995 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610031 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-trusted-ca-bundle\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-serving-cert\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610497 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-oauth-config\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610516 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-config\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610642 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-oauth-serving-cert\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610671 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlmrw\" (UniqueName: \"kubernetes.io/projected/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-kube-api-access-zlmrw\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.610691 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-service-ca\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.700280 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4"] Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714382 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-trusted-ca-bundle\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714442 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-serving-cert\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " 
pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714486 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-oauth-config\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714514 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-config\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714576 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-oauth-serving-cert\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714607 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlmrw\" (UniqueName: \"kubernetes.io/projected/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-kube-api-access-zlmrw\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714638 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-service-ca\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " 
pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.714690 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.715663 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-trusted-ca-bundle\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.716320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-oauth-serving-cert\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.716968 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-config\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.717490 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-service-ca\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc 
kubenswrapper[4705]: I0216 15:07:38.722024 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-serving-cert\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.725101 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-oauth-config\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.733232 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7a87077c-c5fa-4c92-9c08-44dcf11d38c7-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-9kf74\" (UID: \"7a87077c-c5fa-4c92-9c08-44dcf11d38c7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.737764 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlmrw\" (UniqueName: \"kubernetes.io/projected/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-kube-api-access-zlmrw\") pod \"console-5cb874789d-44cjq\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.871492 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.886705 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4" event={"ID":"ed67458f-1875-405e-85a5-2a4f7d54089b","Type":"ContainerStarted","Data":"6f7a728c5618f61e0146cac924dd9fe784b169741f507663012bea8f022dd605"} Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.887432 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wr89v" event={"ID":"9ffb9d03-b8ea-44ff-9397-58b55c367d89","Type":"ContainerStarted","Data":"599d529d283ad9de645b516a35c0da9fc33387a655b9f75358257141a4589cc7"} Feb 16 15:07:38 crc kubenswrapper[4705]: I0216 15:07:38.987948 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.045752 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9"] Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.151238 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cb874789d-44cjq"] Feb 16 15:07:39 crc kubenswrapper[4705]: W0216 15:07:39.156745 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ab25c9f_91f2_46f2_8abf_5004d8c114ad.slice/crio-2ef02b500f27905a4144d7afb7f5f45a0144521e9f481a2f7671e1a311d7ac8c WatchSource:0}: Error finding container 2ef02b500f27905a4144d7afb7f5f45a0144521e9f481a2f7671e1a311d7ac8c: Status 404 returned error can't find the container with id 2ef02b500f27905a4144d7afb7f5f45a0144521e9f481a2f7671e1a311d7ac8c Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.455046 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74"] Feb 16 15:07:39 crc 
kubenswrapper[4705]: W0216 15:07:39.468573 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a87077c_c5fa_4c92_9c08_44dcf11d38c7.slice/crio-697bb26475ee278ee9b2f910c6f4a90466ced569567160a2d62ee2ae6af7c860 WatchSource:0}: Error finding container 697bb26475ee278ee9b2f910c6f4a90466ced569567160a2d62ee2ae6af7c860: Status 404 returned error can't find the container with id 697bb26475ee278ee9b2f910c6f4a90466ced569567160a2d62ee2ae6af7c860 Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.899529 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" event={"ID":"303c8298-3e10-49e8-96b1-ed1dafcd23e3","Type":"ContainerStarted","Data":"68ab50ae95fb3d694a0e114b2affc564cc66fb9050b1a9369d0aed0c4ae98248"} Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.901575 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cb874789d-44cjq" event={"ID":"5ab25c9f-91f2-46f2-8abf-5004d8c114ad","Type":"ContainerStarted","Data":"b9665d2970a8c4f5fa92be6c299171cf94ba823f0cf4cc2d207db22022558095"} Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.901626 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cb874789d-44cjq" event={"ID":"5ab25c9f-91f2-46f2-8abf-5004d8c114ad","Type":"ContainerStarted","Data":"2ef02b500f27905a4144d7afb7f5f45a0144521e9f481a2f7671e1a311d7ac8c"} Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.902877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" event={"ID":"7a87077c-c5fa-4c92-9c08-44dcf11d38c7","Type":"ContainerStarted","Data":"697bb26475ee278ee9b2f910c6f4a90466ced569567160a2d62ee2ae6af7c860"} Feb 16 15:07:39 crc kubenswrapper[4705]: I0216 15:07:39.935347 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-5cb874789d-44cjq" podStartSLOduration=1.935322312 podStartE2EDuration="1.935322312s" podCreationTimestamp="2026-02-16 15:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:07:39.920845664 +0000 UTC m=+854.105822740" watchObservedRunningTime="2026-02-16 15:07:39.935322312 +0000 UTC m=+854.120299378" Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.930217 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" event={"ID":"303c8298-3e10-49e8-96b1-ed1dafcd23e3","Type":"ContainerStarted","Data":"4335750875ffcfddd6b580d5c1b6a01cf4c9c2647d4dca7e11785b83b74789dd"} Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.933799 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4" event={"ID":"ed67458f-1875-405e-85a5-2a4f7d54089b","Type":"ContainerStarted","Data":"87c0a4a38a3527738c8fc86bfbb9bd1497ea4e303ff49ae76c89ec3e2ed5179c"} Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.934805 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wr89v" event={"ID":"9ffb9d03-b8ea-44ff-9397-58b55c367d89","Type":"ContainerStarted","Data":"bd4e22d200cae623261a9cf00dc7ed365e2e8924f3e3dc3230d4d52b9e3991f7"} Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.935457 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.936775 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" event={"ID":"7a87077c-c5fa-4c92-9c08-44dcf11d38c7","Type":"ContainerStarted","Data":"e2ce8baf20d28c1cd4837600472d5359888ee260750d0dc7cf0c939f9ed62077"} Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.937197 4705 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.948732 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hl5c9" podStartSLOduration=1.485368711 podStartE2EDuration="4.94870421s" podCreationTimestamp="2026-02-16 15:07:38 +0000 UTC" firstStartedPulling="2026-02-16 15:07:39.078173176 +0000 UTC m=+853.263150292" lastFinishedPulling="2026-02-16 15:07:42.541508715 +0000 UTC m=+856.726485791" observedRunningTime="2026-02-16 15:07:42.945285113 +0000 UTC m=+857.130262199" watchObservedRunningTime="2026-02-16 15:07:42.94870421 +0000 UTC m=+857.133681286" Feb 16 15:07:42 crc kubenswrapper[4705]: I0216 15:07:42.996447 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wr89v" podStartSLOduration=0.875902487 podStartE2EDuration="4.996423615s" podCreationTimestamp="2026-02-16 15:07:38 +0000 UTC" firstStartedPulling="2026-02-16 15:07:38.441185576 +0000 UTC m=+852.626162642" lastFinishedPulling="2026-02-16 15:07:42.561706654 +0000 UTC m=+856.746683770" observedRunningTime="2026-02-16 15:07:42.993086301 +0000 UTC m=+857.178063387" watchObservedRunningTime="2026-02-16 15:07:42.996423615 +0000 UTC m=+857.181400691" Feb 16 15:07:43 crc kubenswrapper[4705]: I0216 15:07:43.020644 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" podStartSLOduration=2.937500152 podStartE2EDuration="6.020620416s" podCreationTimestamp="2026-02-16 15:07:37 +0000 UTC" firstStartedPulling="2026-02-16 15:07:39.471310185 +0000 UTC m=+853.656287261" lastFinishedPulling="2026-02-16 15:07:42.554430449 +0000 UTC m=+856.739407525" observedRunningTime="2026-02-16 15:07:43.018892928 +0000 UTC m=+857.203870034" watchObservedRunningTime="2026-02-16 15:07:43.020620416 +0000 UTC 
m=+857.205597492" Feb 16 15:07:45 crc kubenswrapper[4705]: I0216 15:07:45.946317 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:45 crc kubenswrapper[4705]: I0216 15:07:45.975467 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4" event={"ID":"ed67458f-1875-405e-85a5-2a4f7d54089b","Type":"ContainerStarted","Data":"4c55f81ad54ee5ca366362a72b88322d21a6e26c67aba10f5f500392f60a07a4"} Feb 16 15:07:46 crc kubenswrapper[4705]: I0216 15:07:46.012732 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tnbq4" podStartSLOduration=2.482258383 podStartE2EDuration="9.012688424s" podCreationTimestamp="2026-02-16 15:07:37 +0000 UTC" firstStartedPulling="2026-02-16 15:07:38.704508616 +0000 UTC m=+852.889485692" lastFinishedPulling="2026-02-16 15:07:45.234938657 +0000 UTC m=+859.419915733" observedRunningTime="2026-02-16 15:07:46.006824779 +0000 UTC m=+860.191801865" watchObservedRunningTime="2026-02-16 15:07:46.012688424 +0000 UTC m=+860.197665550" Feb 16 15:07:46 crc kubenswrapper[4705]: I0216 15:07:46.035446 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:46 crc kubenswrapper[4705]: I0216 15:07:46.202694 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2m7d"] Feb 16 15:07:46 crc kubenswrapper[4705]: I0216 15:07:46.984749 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t2m7d" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="registry-server" containerID="cri-o://ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8" gracePeriod=2 Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.462773 4705 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.623082 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-catalog-content\") pod \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.623359 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jcwn\" (UniqueName: \"kubernetes.io/projected/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-kube-api-access-5jcwn\") pod \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.623459 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-utilities\") pod \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\" (UID: \"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98\") " Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.624297 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-utilities" (OuterVolumeSpecName: "utilities") pod "05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" (UID: "05e26c7e-0ce8-4f9a-9b45-a49960dc4f98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.630533 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-kube-api-access-5jcwn" (OuterVolumeSpecName: "kube-api-access-5jcwn") pod "05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" (UID: "05e26c7e-0ce8-4f9a-9b45-a49960dc4f98"). InnerVolumeSpecName "kube-api-access-5jcwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.726708 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jcwn\" (UniqueName: \"kubernetes.io/projected/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-kube-api-access-5jcwn\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.726756 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.785900 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" (UID: "05e26c7e-0ce8-4f9a-9b45-a49960dc4f98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.828783 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.998478 4705 generic.go:334] "Generic (PLEG): container finished" podID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerID="ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8" exitCode=0 Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.998545 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t2m7d" Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.998547 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2m7d" event={"ID":"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98","Type":"ContainerDied","Data":"ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8"} Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.998658 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2m7d" event={"ID":"05e26c7e-0ce8-4f9a-9b45-a49960dc4f98","Type":"ContainerDied","Data":"e1f2ace940038734299af510330a0ecb19a41c91fefa525c71d6e5edc9c59bea"} Feb 16 15:07:47 crc kubenswrapper[4705]: I0216 15:07:47.998702 4705 scope.go:117] "RemoveContainer" containerID="ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.038179 4705 scope.go:117] "RemoveContainer" containerID="b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.054458 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2m7d"] Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.068013 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t2m7d"] Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.072691 4705 scope.go:117] "RemoveContainer" containerID="6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.112511 4705 scope.go:117] "RemoveContainer" containerID="ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8" Feb 16 15:07:48 crc kubenswrapper[4705]: E0216 15:07:48.113302 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8\": container with ID starting with ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8 not found: ID does not exist" containerID="ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.113361 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8"} err="failed to get container status \"ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8\": rpc error: code = NotFound desc = could not find container \"ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8\": container with ID starting with ee6d6b77e375f30dc026a3a8b7d45e1d713f54f327370b8c1e56936e4ee471e8 not found: ID does not exist" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.113438 4705 scope.go:117] "RemoveContainer" containerID="b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f" Feb 16 15:07:48 crc kubenswrapper[4705]: E0216 15:07:48.114927 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f\": container with ID starting with b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f not found: ID does not exist" containerID="b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.114959 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f"} err="failed to get container status \"b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f\": rpc error: code = NotFound desc = could not find container \"b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f\": container with ID 
starting with b2ea8f125b7d766febffcdbb0b8accb8c583f046dbab31db154177ebbc22766f not found: ID does not exist" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.114977 4705 scope.go:117] "RemoveContainer" containerID="6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09" Feb 16 15:07:48 crc kubenswrapper[4705]: E0216 15:07:48.115413 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09\": container with ID starting with 6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09 not found: ID does not exist" containerID="6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.115476 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09"} err="failed to get container status \"6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09\": rpc error: code = NotFound desc = could not find container \"6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09\": container with ID starting with 6a990e77ab8d09966453755ad56bdb0d6148281569c73d26ec849c4ea7c37d09 not found: ID does not exist" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.435380 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" path="/var/lib/kubelet/pods/05e26c7e-0ce8-4f9a-9b45-a49960dc4f98/volumes" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.436718 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wr89v" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.872418 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:48 crc 
kubenswrapper[4705]: I0216 15:07:48.873144 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:48 crc kubenswrapper[4705]: I0216 15:07:48.878098 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:49 crc kubenswrapper[4705]: I0216 15:07:49.017556 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:07:49 crc kubenswrapper[4705]: I0216 15:07:49.142848 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7bb776c56c-pzs4q"] Feb 16 15:07:58 crc kubenswrapper[4705]: I0216 15:07:58.995962 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-9kf74" Feb 16 15:08:01 crc kubenswrapper[4705]: I0216 15:08:01.684191 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:08:01 crc kubenswrapper[4705]: I0216 15:08:01.685033 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.220912 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7bb776c56c-pzs4q" podUID="80172f35-e30c-409c-b28e-eb65d41dd384" containerName="console" containerID="cri-o://ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8" 
gracePeriod=15 Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.649933 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bb776c56c-pzs4q_80172f35-e30c-409c-b28e-eb65d41dd384/console/0.log" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.650271 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822355 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-serving-cert\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822422 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-trusted-ca-bundle\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822448 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cxdc\" (UniqueName: \"kubernetes.io/projected/80172f35-e30c-409c-b28e-eb65d41dd384-kube-api-access-4cxdc\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822507 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-service-ca\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822572 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-console-config\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822604 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-oauth-serving-cert\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.822701 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-oauth-config\") pod \"80172f35-e30c-409c-b28e-eb65d41dd384\" (UID: \"80172f35-e30c-409c-b28e-eb65d41dd384\") " Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.823458 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.823466 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.823673 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-console-config" (OuterVolumeSpecName: "console-config") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.823697 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-service-ca" (OuterVolumeSpecName: "service-ca") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.828569 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.830184 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.830245 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80172f35-e30c-409c-b28e-eb65d41dd384-kube-api-access-4cxdc" (OuterVolumeSpecName: "kube-api-access-4cxdc") pod "80172f35-e30c-409c-b28e-eb65d41dd384" (UID: "80172f35-e30c-409c-b28e-eb65d41dd384"). InnerVolumeSpecName "kube-api-access-4cxdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924853 4705 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924907 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cxdc\" (UniqueName: \"kubernetes.io/projected/80172f35-e30c-409c-b28e-eb65d41dd384-kube-api-access-4cxdc\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924920 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924930 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924941 4705 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924950 4705 reconciler_common.go:293] "Volume detached for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/80172f35-e30c-409c-b28e-eb65d41dd384-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:14 crc kubenswrapper[4705]: I0216 15:08:14.924960 4705 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/80172f35-e30c-409c-b28e-eb65d41dd384-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.277727 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bb776c56c-pzs4q_80172f35-e30c-409c-b28e-eb65d41dd384/console/0.log" Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.278118 4705 generic.go:334] "Generic (PLEG): container finished" podID="80172f35-e30c-409c-b28e-eb65d41dd384" containerID="ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8" exitCode=2 Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.278155 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb776c56c-pzs4q" event={"ID":"80172f35-e30c-409c-b28e-eb65d41dd384","Type":"ContainerDied","Data":"ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8"} Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.278193 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb776c56c-pzs4q" event={"ID":"80172f35-e30c-409c-b28e-eb65d41dd384","Type":"ContainerDied","Data":"62764daed3103786ebb88f7fa6ff0d0d41c134f9dfddbfa2f020958e2f20e60b"} Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.278218 4705 scope.go:117] "RemoveContainer" containerID="ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8" Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.278244 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7bb776c56c-pzs4q" Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.304136 4705 scope.go:117] "RemoveContainer" containerID="ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8" Feb 16 15:08:15 crc kubenswrapper[4705]: E0216 15:08:15.305204 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8\": container with ID starting with ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8 not found: ID does not exist" containerID="ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8" Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.305270 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8"} err="failed to get container status \"ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8\": rpc error: code = NotFound desc = could not find container \"ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8\": container with ID starting with ab56978f9164bf591070775d3648ddbac8a8d5f1d7becd1548fd9398d0947eb8 not found: ID does not exist" Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.324433 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7bb776c56c-pzs4q"] Feb 16 15:08:15 crc kubenswrapper[4705]: I0216 15:08:15.335270 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7bb776c56c-pzs4q"] Feb 16 15:08:16 crc kubenswrapper[4705]: I0216 15:08:16.435773 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80172f35-e30c-409c-b28e-eb65d41dd384" path="/var/lib/kubelet/pods/80172f35-e30c-409c-b28e-eb65d41dd384/volumes" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.564904 4705 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"] Feb 16 15:08:21 crc kubenswrapper[4705]: E0216 15:08:21.566317 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="registry-server" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.566343 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="registry-server" Feb 16 15:08:21 crc kubenswrapper[4705]: E0216 15:08:21.566362 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80172f35-e30c-409c-b28e-eb65d41dd384" containerName="console" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.566447 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="80172f35-e30c-409c-b28e-eb65d41dd384" containerName="console" Feb 16 15:08:21 crc kubenswrapper[4705]: E0216 15:08:21.566480 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="extract-content" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.566494 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="extract-content" Feb 16 15:08:21 crc kubenswrapper[4705]: E0216 15:08:21.566540 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="extract-utilities" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.566557 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="extract-utilities" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.566845 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="80172f35-e30c-409c-b28e-eb65d41dd384" containerName="console" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.566910 4705 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="05e26c7e-0ce8-4f9a-9b45-a49960dc4f98" containerName="registry-server" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.569151 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.572297 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.584836 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"] Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.662267 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.662953 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6cwz\" (UniqueName: \"kubernetes.io/projected/e5b4da77-aea8-42f2-8a75-43943612e0e4-kube-api-access-g6cwz\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.663177 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-bundle\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.764454 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.764593 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.764643 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6cwz\" (UniqueName: \"kubernetes.io/projected/e5b4da77-aea8-42f2-8a75-43943612e0e4-kube-api-access-g6cwz\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.765161 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.765168 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.791248 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6cwz\" (UniqueName: \"kubernetes.io/projected/e5b4da77-aea8-42f2-8a75-43943612e0e4-kube-api-access-g6cwz\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" Feb 16 15:08:21 crc kubenswrapper[4705]: I0216 15:08:21.904809 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" Feb 16 15:08:22 crc kubenswrapper[4705]: I0216 15:08:22.400161 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7"] Feb 16 15:08:23 crc kubenswrapper[4705]: I0216 15:08:23.354183 4705 generic.go:334] "Generic (PLEG): container finished" podID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerID="732e7d5edc8cbe6eac47f114645abab9e9e240ce74b091e22a5b56434835e6f8" exitCode=0 Feb 16 15:08:23 crc kubenswrapper[4705]: I0216 15:08:23.354261 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" event={"ID":"e5b4da77-aea8-42f2-8a75-43943612e0e4","Type":"ContainerDied","Data":"732e7d5edc8cbe6eac47f114645abab9e9e240ce74b091e22a5b56434835e6f8"} Feb 16 15:08:23 crc kubenswrapper[4705]: I0216 15:08:23.354560 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" event={"ID":"e5b4da77-aea8-42f2-8a75-43943612e0e4","Type":"ContainerStarted","Data":"6c96d74fb5f00ded53661fb66a92dda76adc37399ba1aac37aa1b32f53da2329"} Feb 16 15:08:23 crc kubenswrapper[4705]: I0216 15:08:23.361785 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:08:25 crc kubenswrapper[4705]: I0216 15:08:25.377153 4705 generic.go:334] "Generic (PLEG): container finished" podID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerID="edcb1e5aa6fa94e358515db7cb9dbc37da590533be6f6f1573aa9dc92a7e51ea" exitCode=0 Feb 16 15:08:25 crc kubenswrapper[4705]: I0216 15:08:25.377213 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" 
event={"ID":"e5b4da77-aea8-42f2-8a75-43943612e0e4","Type":"ContainerDied","Data":"edcb1e5aa6fa94e358515db7cb9dbc37da590533be6f6f1573aa9dc92a7e51ea"} Feb 16 15:08:26 crc kubenswrapper[4705]: I0216 15:08:26.393480 4705 generic.go:334] "Generic (PLEG): container finished" podID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerID="6076b29932cbc2ee4abdbcae88d98e92e177a415ba13b846a51bb7f7be06afc1" exitCode=0 Feb 16 15:08:26 crc kubenswrapper[4705]: I0216 15:08:26.393564 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" event={"ID":"e5b4da77-aea8-42f2-8a75-43943612e0e4","Type":"ContainerDied","Data":"6076b29932cbc2ee4abdbcae88d98e92e177a415ba13b846a51bb7f7be06afc1"} Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.754820 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.896525 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-util\") pod \"e5b4da77-aea8-42f2-8a75-43943612e0e4\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.897048 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-bundle\") pod \"e5b4da77-aea8-42f2-8a75-43943612e0e4\" (UID: \"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.897083 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6cwz\" (UniqueName: \"kubernetes.io/projected/e5b4da77-aea8-42f2-8a75-43943612e0e4-kube-api-access-g6cwz\") pod \"e5b4da77-aea8-42f2-8a75-43943612e0e4\" (UID: 
\"e5b4da77-aea8-42f2-8a75-43943612e0e4\") " Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.898924 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-bundle" (OuterVolumeSpecName: "bundle") pod "e5b4da77-aea8-42f2-8a75-43943612e0e4" (UID: "e5b4da77-aea8-42f2-8a75-43943612e0e4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.904597 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5b4da77-aea8-42f2-8a75-43943612e0e4-kube-api-access-g6cwz" (OuterVolumeSpecName: "kube-api-access-g6cwz") pod "e5b4da77-aea8-42f2-8a75-43943612e0e4" (UID: "e5b4da77-aea8-42f2-8a75-43943612e0e4"). InnerVolumeSpecName "kube-api-access-g6cwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.910817 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-util" (OuterVolumeSpecName: "util") pod "e5b4da77-aea8-42f2-8a75-43943612e0e4" (UID: "e5b4da77-aea8-42f2-8a75-43943612e0e4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.999640 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:27 crc kubenswrapper[4705]: I0216 15:08:27.999988 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5b4da77-aea8-42f2-8a75-43943612e0e4-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:28 crc kubenswrapper[4705]: I0216 15:08:28.000067 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6cwz\" (UniqueName: \"kubernetes.io/projected/e5b4da77-aea8-42f2-8a75-43943612e0e4-kube-api-access-g6cwz\") on node \"crc\" DevicePath \"\"" Feb 16 15:08:28 crc kubenswrapper[4705]: I0216 15:08:28.417017 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" event={"ID":"e5b4da77-aea8-42f2-8a75-43943612e0e4","Type":"ContainerDied","Data":"6c96d74fb5f00ded53661fb66a92dda76adc37399ba1aac37aa1b32f53da2329"} Feb 16 15:08:28 crc kubenswrapper[4705]: I0216 15:08:28.417466 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c96d74fb5f00ded53661fb66a92dda76adc37399ba1aac37aa1b32f53da2329" Feb 16 15:08:28 crc kubenswrapper[4705]: I0216 15:08:28.417120 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7" Feb 16 15:08:31 crc kubenswrapper[4705]: I0216 15:08:31.684539 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:08:31 crc kubenswrapper[4705]: I0216 15:08:31.685116 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.047816 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"] Feb 16 15:08:36 crc kubenswrapper[4705]: E0216 15:08:36.048420 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerName="pull" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.048434 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerName="pull" Feb 16 15:08:36 crc kubenswrapper[4705]: E0216 15:08:36.048452 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerName="util" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.048458 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerName="util" Feb 16 15:08:36 crc kubenswrapper[4705]: E0216 15:08:36.048473 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" 
containerName="extract" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.048479 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerName="extract" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.048619 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5b4da77-aea8-42f2-8a75-43943612e0e4" containerName="extract" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.049210 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.059252 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.060008 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fxwcf" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.060385 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.060621 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.066822 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"] Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.069402 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.157761 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55ce7b61-e1e6-483d-a84f-7ea168ef9672-apiservice-cert\") pod 
\"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.158111 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k9j9\" (UniqueName: \"kubernetes.io/projected/55ce7b61-e1e6-483d-a84f-7ea168ef9672-kube-api-access-4k9j9\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.158269 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55ce7b61-e1e6-483d-a84f-7ea168ef9672-webhook-cert\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.259962 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k9j9\" (UniqueName: \"kubernetes.io/projected/55ce7b61-e1e6-483d-a84f-7ea168ef9672-kube-api-access-4k9j9\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.260041 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55ce7b61-e1e6-483d-a84f-7ea168ef9672-webhook-cert\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " 
pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.260102 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55ce7b61-e1e6-483d-a84f-7ea168ef9672-apiservice-cert\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.277384 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55ce7b61-e1e6-483d-a84f-7ea168ef9672-apiservice-cert\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.277632 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55ce7b61-e1e6-483d-a84f-7ea168ef9672-webhook-cert\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.296302 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k9j9\" (UniqueName: \"kubernetes.io/projected/55ce7b61-e1e6-483d-a84f-7ea168ef9672-kube-api-access-4k9j9\") pod \"metallb-operator-controller-manager-76745d596b-4dznb\" (UID: \"55ce7b61-e1e6-483d-a84f-7ea168ef9672\") " pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.368805 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.496307 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"] Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.504622 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.514254 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-vp6v6" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.517295 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.517300 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.531102 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"] Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.669519 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/624f7ca8-2011-4ed6-9ee2-24acddf29390-apiservice-cert\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.669568 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/624f7ca8-2011-4ed6-9ee2-24acddf29390-webhook-cert\") pod 
\"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.669592 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw696\" (UniqueName: \"kubernetes.io/projected/624f7ca8-2011-4ed6-9ee2-24acddf29390-kube-api-access-dw696\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.771803 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/624f7ca8-2011-4ed6-9ee2-24acddf29390-apiservice-cert\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.772249 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/624f7ca8-2011-4ed6-9ee2-24acddf29390-webhook-cert\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.772549 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw696\" (UniqueName: \"kubernetes.io/projected/624f7ca8-2011-4ed6-9ee2-24acddf29390-kube-api-access-dw696\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 
15:08:36.788392 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/624f7ca8-2011-4ed6-9ee2-24acddf29390-webhook-cert\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.788457 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/624f7ca8-2011-4ed6-9ee2-24acddf29390-apiservice-cert\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.792528 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw696\" (UniqueName: \"kubernetes.io/projected/624f7ca8-2011-4ed6-9ee2-24acddf29390-kube-api-access-dw696\") pod \"metallb-operator-webhook-server-75967976b4-q84hp\" (UID: \"624f7ca8-2011-4ed6-9ee2-24acddf29390\") " pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" Feb 16 15:08:36 crc kubenswrapper[4705]: I0216 15:08:36.823629 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" Feb 16 15:08:37 crc kubenswrapper[4705]: I0216 15:08:37.105872 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76745d596b-4dznb"] Feb 16 15:08:37 crc kubenswrapper[4705]: I0216 15:08:37.350553 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75967976b4-q84hp"] Feb 16 15:08:37 crc kubenswrapper[4705]: W0216 15:08:37.353118 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod624f7ca8_2011_4ed6_9ee2_24acddf29390.slice/crio-f30c1626e032d6c4ced0a3126af30babe2e7c7ddd57545e71b4e90a4b07d0016 WatchSource:0}: Error finding container f30c1626e032d6c4ced0a3126af30babe2e7c7ddd57545e71b4e90a4b07d0016: Status 404 returned error can't find the container with id f30c1626e032d6c4ced0a3126af30babe2e7c7ddd57545e71b4e90a4b07d0016 Feb 16 15:08:37 crc kubenswrapper[4705]: I0216 15:08:37.512339 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" event={"ID":"624f7ca8-2011-4ed6-9ee2-24acddf29390","Type":"ContainerStarted","Data":"f30c1626e032d6c4ced0a3126af30babe2e7c7ddd57545e71b4e90a4b07d0016"} Feb 16 15:08:37 crc kubenswrapper[4705]: I0216 15:08:37.514071 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" event={"ID":"55ce7b61-e1e6-483d-a84f-7ea168ef9672","Type":"ContainerStarted","Data":"16142500a433faeb385d51a91bf4850751e8a7de8beb1533dead43d43fe04733"} Feb 16 15:08:43 crc kubenswrapper[4705]: I0216 15:08:43.587492 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" 
event={"ID":"55ce7b61-e1e6-483d-a84f-7ea168ef9672","Type":"ContainerStarted","Data":"eb67781b4e2d597f45941a4c01c4dc97651e53cdcbc73517154a09a9fb67f78b"} Feb 16 15:08:43 crc kubenswrapper[4705]: I0216 15:08:43.588128 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:08:43 crc kubenswrapper[4705]: I0216 15:08:43.589550 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" event={"ID":"624f7ca8-2011-4ed6-9ee2-24acddf29390","Type":"ContainerStarted","Data":"ad89b43d796bb1caa6af788754c64e20a0bf58cb897d1d0dc1437582e86ad286"} Feb 16 15:08:43 crc kubenswrapper[4705]: I0216 15:08:43.615295 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" podStartSLOduration=2.527992671 podStartE2EDuration="7.61527352s" podCreationTimestamp="2026-02-16 15:08:36 +0000 UTC" firstStartedPulling="2026-02-16 15:08:37.162596041 +0000 UTC m=+911.347573117" lastFinishedPulling="2026-02-16 15:08:42.24987688 +0000 UTC m=+916.434853966" observedRunningTime="2026-02-16 15:08:43.610693589 +0000 UTC m=+917.795670685" watchObservedRunningTime="2026-02-16 15:08:43.61527352 +0000 UTC m=+917.800250596" Feb 16 15:08:43 crc kubenswrapper[4705]: I0216 15:08:43.647248 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" podStartSLOduration=2.729781863 podStartE2EDuration="7.64721903s" podCreationTimestamp="2026-02-16 15:08:36 +0000 UTC" firstStartedPulling="2026-02-16 15:08:37.356236782 +0000 UTC m=+911.541213858" lastFinishedPulling="2026-02-16 15:08:42.273673949 +0000 UTC m=+916.458651025" observedRunningTime="2026-02-16 15:08:43.640685804 +0000 UTC m=+917.825662890" watchObservedRunningTime="2026-02-16 15:08:43.64721903 +0000 UTC m=+917.832196116" Feb 16 
15:08:44 crc kubenswrapper[4705]: I0216 15:08:44.598262 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" Feb 16 15:08:56 crc kubenswrapper[4705]: I0216 15:08:56.828865 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-75967976b4-q84hp" Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.541846 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r9pdg"] Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.543750 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.559002 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9pdg"] Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.704058 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-utilities\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.704534 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z5xl\" (UniqueName: \"kubernetes.io/projected/e7fb1d1e-a675-4965-9698-79db7cb89697-kube-api-access-6z5xl\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.704617 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-catalog-content\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.806818 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z5xl\" (UniqueName: \"kubernetes.io/projected/e7fb1d1e-a675-4965-9698-79db7cb89697-kube-api-access-6z5xl\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.806914 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-catalog-content\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.806981 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-utilities\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.807553 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-catalog-content\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.807645 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-utilities\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.841301 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z5xl\" (UniqueName: \"kubernetes.io/projected/e7fb1d1e-a675-4965-9698-79db7cb89697-kube-api-access-6z5xl\") pod \"certified-operators-r9pdg\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:08:57 crc kubenswrapper[4705]: I0216 15:08:57.874236 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:08:58 crc kubenswrapper[4705]: I0216 15:08:58.541507 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9pdg"] Feb 16 15:08:58 crc kubenswrapper[4705]: I0216 15:08:58.709387 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9pdg" event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerStarted","Data":"6ec3342b21b192f9b022f679237496c74782a9c984fb5bddd5c4b789c2bdab1f"} Feb 16 15:08:59 crc kubenswrapper[4705]: I0216 15:08:59.721123 4705 generic.go:334] "Generic (PLEG): container finished" podID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerID="48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22" exitCode=0 Feb 16 15:08:59 crc kubenswrapper[4705]: I0216 15:08:59.721521 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9pdg" event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerDied","Data":"48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22"} Feb 16 15:09:00 crc kubenswrapper[4705]: I0216 15:09:00.733694 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-r9pdg" event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerStarted","Data":"f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc"} Feb 16 15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.684140 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.684244 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.684324 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.685592 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"edd58db5c11c3fe5d8c13faff30e9ac5edf92d3f3197f975dddc0c31823f6a25"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.685735 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://edd58db5c11c3fe5d8c13faff30e9ac5edf92d3f3197f975dddc0c31823f6a25" gracePeriod=600 Feb 16 
15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.747593 4705 generic.go:334] "Generic (PLEG): container finished" podID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerID="f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc" exitCode=0 Feb 16 15:09:01 crc kubenswrapper[4705]: I0216 15:09:01.747657 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9pdg" event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerDied","Data":"f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc"} Feb 16 15:09:02 crc kubenswrapper[4705]: I0216 15:09:02.760437 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="edd58db5c11c3fe5d8c13faff30e9ac5edf92d3f3197f975dddc0c31823f6a25" exitCode=0 Feb 16 15:09:02 crc kubenswrapper[4705]: I0216 15:09:02.760494 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"edd58db5c11c3fe5d8c13faff30e9ac5edf92d3f3197f975dddc0c31823f6a25"} Feb 16 15:09:02 crc kubenswrapper[4705]: I0216 15:09:02.761449 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"de600c28f91eecebf3f1afcacfc61ecdebf8796eece435cf86c7979eb622b546"} Feb 16 15:09:02 crc kubenswrapper[4705]: I0216 15:09:02.761475 4705 scope.go:117] "RemoveContainer" containerID="66c40339ff6d451b12f9977b3110b2e136ea4dcbaee6612ad6a69e020c815948" Feb 16 15:09:02 crc kubenswrapper[4705]: I0216 15:09:02.768357 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9pdg" 
event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerStarted","Data":"26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd"} Feb 16 15:09:07 crc kubenswrapper[4705]: I0216 15:09:07.875418 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:09:07 crc kubenswrapper[4705]: I0216 15:09:07.876054 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:09:07 crc kubenswrapper[4705]: I0216 15:09:07.919218 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:09:07 crc kubenswrapper[4705]: I0216 15:09:07.939277 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r9pdg" podStartSLOduration=8.355613402 podStartE2EDuration="10.939257306s" podCreationTimestamp="2026-02-16 15:08:57 +0000 UTC" firstStartedPulling="2026-02-16 15:08:59.72352786 +0000 UTC m=+933.908504946" lastFinishedPulling="2026-02-16 15:09:02.307171774 +0000 UTC m=+936.492148850" observedRunningTime="2026-02-16 15:09:02.809096775 +0000 UTC m=+936.994073861" watchObservedRunningTime="2026-02-16 15:09:07.939257306 +0000 UTC m=+942.124234382" Feb 16 15:09:08 crc kubenswrapper[4705]: I0216 15:09:08.878646 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:09:10 crc kubenswrapper[4705]: I0216 15:09:10.324617 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9pdg"] Feb 16 15:09:10 crc kubenswrapper[4705]: I0216 15:09:10.853259 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r9pdg" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="registry-server" 
containerID="cri-o://26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd" gracePeriod=2 Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.421059 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.529242 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-utilities\") pod \"e7fb1d1e-a675-4965-9698-79db7cb89697\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.529345 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z5xl\" (UniqueName: \"kubernetes.io/projected/e7fb1d1e-a675-4965-9698-79db7cb89697-kube-api-access-6z5xl\") pod \"e7fb1d1e-a675-4965-9698-79db7cb89697\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.529445 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-catalog-content\") pod \"e7fb1d1e-a675-4965-9698-79db7cb89697\" (UID: \"e7fb1d1e-a675-4965-9698-79db7cb89697\") " Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.531539 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-utilities" (OuterVolumeSpecName: "utilities") pod "e7fb1d1e-a675-4965-9698-79db7cb89697" (UID: "e7fb1d1e-a675-4965-9698-79db7cb89697"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.539115 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7fb1d1e-a675-4965-9698-79db7cb89697-kube-api-access-6z5xl" (OuterVolumeSpecName: "kube-api-access-6z5xl") pod "e7fb1d1e-a675-4965-9698-79db7cb89697" (UID: "e7fb1d1e-a675-4965-9698-79db7cb89697"). InnerVolumeSpecName "kube-api-access-6z5xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.581437 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7fb1d1e-a675-4965-9698-79db7cb89697" (UID: "e7fb1d1e-a675-4965-9698-79db7cb89697"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.632534 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.632609 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z5xl\" (UniqueName: \"kubernetes.io/projected/e7fb1d1e-a675-4965-9698-79db7cb89697-kube-api-access-6z5xl\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.632640 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fb1d1e-a675-4965-9698-79db7cb89697-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.867459 4705 generic.go:334] "Generic (PLEG): container finished" podID="e7fb1d1e-a675-4965-9698-79db7cb89697" 
containerID="26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd" exitCode=0 Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.867537 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9pdg" event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerDied","Data":"26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd"} Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.867622 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9pdg" event={"ID":"e7fb1d1e-a675-4965-9698-79db7cb89697","Type":"ContainerDied","Data":"6ec3342b21b192f9b022f679237496c74782a9c984fb5bddd5c4b789c2bdab1f"} Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.867643 4705 scope.go:117] "RemoveContainer" containerID="26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.867637 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r9pdg" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.887315 4705 scope.go:117] "RemoveContainer" containerID="f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.908142 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9pdg"] Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.915687 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r9pdg"] Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.920240 4705 scope.go:117] "RemoveContainer" containerID="48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.940883 4705 scope.go:117] "RemoveContainer" containerID="26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd" Feb 16 15:09:11 crc kubenswrapper[4705]: E0216 15:09:11.944267 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd\": container with ID starting with 26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd not found: ID does not exist" containerID="26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.944558 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd"} err="failed to get container status \"26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd\": rpc error: code = NotFound desc = could not find container \"26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd\": container with ID starting with 26a490f7aff4cc0c0843b0c59f500a9589effe90f1a52a6e206206e867a3f7bd not 
found: ID does not exist" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.944729 4705 scope.go:117] "RemoveContainer" containerID="f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc" Feb 16 15:09:11 crc kubenswrapper[4705]: E0216 15:09:11.945309 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc\": container with ID starting with f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc not found: ID does not exist" containerID="f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.945391 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc"} err="failed to get container status \"f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc\": rpc error: code = NotFound desc = could not find container \"f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc\": container with ID starting with f7a80e5f30c1d39a4eeb987fc43de0e1bcf2c54ffd1f2f402863593eb9462dbc not found: ID does not exist" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.945436 4705 scope.go:117] "RemoveContainer" containerID="48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22" Feb 16 15:09:11 crc kubenswrapper[4705]: E0216 15:09:11.945796 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22\": container with ID starting with 48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22 not found: ID does not exist" containerID="48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22" Feb 16 15:09:11 crc kubenswrapper[4705]: I0216 15:09:11.945839 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22"} err="failed to get container status \"48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22\": rpc error: code = NotFound desc = could not find container \"48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22\": container with ID starting with 48dd4e04b8e745b7739492ab584a2b0fd49302bbc57c91e973d4cccf5b045a22 not found: ID does not exist" Feb 16 15:09:12 crc kubenswrapper[4705]: I0216 15:09:12.437112 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" path="/var/lib/kubelet/pods/e7fb1d1e-a675-4965-9698-79db7cb89697/volumes" Feb 16 15:09:16 crc kubenswrapper[4705]: I0216 15:09:16.371613 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-76745d596b-4dznb" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.237409 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255"] Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.238340 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="extract-content" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.238446 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="extract-content" Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.238575 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="extract-utilities" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.238656 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="extract-utilities" Feb 16 15:09:17 crc 
kubenswrapper[4705]: E0216 15:09:17.238735 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="registry-server" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.238805 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="registry-server" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.239102 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7fb1d1e-a675-4965-9698-79db7cb89697" containerName="registry-server" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.239996 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.245722 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.246447 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-84lgn" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.255877 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5znjj"] Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.259986 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.262080 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.262105 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255"] Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.263490 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.338896 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-nbgmf"] Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.342538 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.344941 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.345231 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-xcfw5" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.345382 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.345560 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356057 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-frr-conf\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 
15:09:17.356152 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-metrics\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356197 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/06291746-6582-464c-9dff-b4b98a359885-frr-startup\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356302 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s78qq\" (UniqueName: \"kubernetes.io/projected/06291746-6582-464c-9dff-b4b98a359885-kube-api-access-s78qq\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356357 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06291746-6582-464c-9dff-b4b98a359885-metrics-certs\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356485 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/751baaae-9090-48b1-9bae-79b7527d6c02-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356585 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsgwb\" (UniqueName: \"kubernetes.io/projected/751baaae-9090-48b1-9bae-79b7527d6c02-kube-api-access-qsgwb\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-frr-sockets\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.356678 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-reloader\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.366420 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-5p2db"] Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.368092 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.370325 4705 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.407719 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-5p2db"] Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.458721 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-cert\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459047 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsgwb\" (UniqueName: \"kubernetes.io/projected/751baaae-9090-48b1-9bae-79b7527d6c02-kube-api-access-qsgwb\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459152 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-frr-sockets\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459218 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-reloader\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 
15:09:17.459290 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-metrics-certs\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459383 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-frr-conf\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459489 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-metrics\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459557 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459633 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/06291746-6582-464c-9dff-b4b98a359885-frr-startup\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459715 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s78qq\" (UniqueName: 
\"kubernetes.io/projected/06291746-6582-464c-9dff-b4b98a359885-kube-api-access-s78qq\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459792 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06291746-6582-464c-9dff-b4b98a359885-metrics-certs\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.460106 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-metrics-certs\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.460191 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ml4z\" (UniqueName: \"kubernetes.io/projected/2536f291-dea1-4673-acf7-9beaffa87817-kube-api-access-6ml4z\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.460271 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/751baaae-9090-48b1-9bae-79b7527d6c02-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.460343 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/2536f291-dea1-4673-acf7-9beaffa87817-metallb-excludel2\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.460446 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wmlf\" (UniqueName: \"kubernetes.io/projected/493ad03c-5e3e-4726-9764-272f39f5aa37-kube-api-access-8wmlf\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.459744 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-frr-sockets\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.460848 4705 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.460939 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06291746-6582-464c-9dff-b4b98a359885-metrics-certs podName:06291746-6582-464c-9dff-b4b98a359885 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:17.96092443 +0000 UTC m=+952.145901506 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/06291746-6582-464c-9dff-b4b98a359885-metrics-certs") pod "frr-k8s-5znjj" (UID: "06291746-6582-464c-9dff-b4b98a359885") : secret "frr-k8s-certs-secret" not found Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.460973 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/06291746-6582-464c-9dff-b4b98a359885-frr-startup\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.461128 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-reloader\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.461281 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-frr-conf\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.461291 4705 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.461468 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/751baaae-9090-48b1-9bae-79b7527d6c02-cert podName:751baaae-9090-48b1-9bae-79b7527d6c02 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:17.961458675 +0000 UTC m=+952.146435741 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/751baaae-9090-48b1-9bae-79b7527d6c02-cert") pod "frr-k8s-webhook-server-78b44bf5bb-x4255" (UID: "751baaae-9090-48b1-9bae-79b7527d6c02") : secret "frr-k8s-webhook-server-cert" not found Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.461493 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/06291746-6582-464c-9dff-b4b98a359885-metrics\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.480225 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsgwb\" (UniqueName: \"kubernetes.io/projected/751baaae-9090-48b1-9bae-79b7527d6c02-kube-api-access-qsgwb\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.500416 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s78qq\" (UniqueName: \"kubernetes.io/projected/06291746-6582-464c-9dff-b4b98a359885-kube-api-access-s78qq\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562593 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2536f291-dea1-4673-acf7-9beaffa87817-metallb-excludel2\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562652 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wmlf\" (UniqueName: 
\"kubernetes.io/projected/493ad03c-5e3e-4726-9764-272f39f5aa37-kube-api-access-8wmlf\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562685 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-cert\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562721 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-metrics-certs\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562757 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562814 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-metrics-certs\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.562837 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ml4z\" (UniqueName: \"kubernetes.io/projected/2536f291-dea1-4673-acf7-9beaffa87817-kube-api-access-6ml4z\") pod \"speaker-nbgmf\" (UID: 
\"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.563232 4705 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.563355 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-metrics-certs podName:493ad03c-5e3e-4726-9764-272f39f5aa37 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:18.06333752 +0000 UTC m=+952.248314596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-metrics-certs") pod "controller-69bbfbf88f-5p2db" (UID: "493ad03c-5e3e-4726-9764-272f39f5aa37") : secret "controller-certs-secret" not found Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.563257 4705 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.563351 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2536f291-dea1-4673-acf7-9beaffa87817-metallb-excludel2\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: E0216 15:09:17.563521 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist podName:2536f291-dea1-4673-acf7-9beaffa87817 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:18.063512585 +0000 UTC m=+952.248489661 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist") pod "speaker-nbgmf" (UID: "2536f291-dea1-4673-acf7-9beaffa87817") : secret "metallb-memberlist" not found Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.567816 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-metrics-certs\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.586024 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ml4z\" (UniqueName: \"kubernetes.io/projected/2536f291-dea1-4673-acf7-9beaffa87817-kube-api-access-6ml4z\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.588883 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-cert\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.601118 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wmlf\" (UniqueName: \"kubernetes.io/projected/493ad03c-5e3e-4726-9764-272f39f5aa37-kube-api-access-8wmlf\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.970202 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06291746-6582-464c-9dff-b4b98a359885-metrics-certs\") pod 
\"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.970640 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/751baaae-9090-48b1-9bae-79b7527d6c02-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.974514 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/751baaae-9090-48b1-9bae-79b7527d6c02-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-x4255\" (UID: \"751baaae-9090-48b1-9bae-79b7527d6c02\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:17 crc kubenswrapper[4705]: I0216 15:09:17.974903 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06291746-6582-464c-9dff-b4b98a359885-metrics-certs\") pod \"frr-k8s-5znjj\" (UID: \"06291746-6582-464c-9dff-b4b98a359885\") " pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.072084 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:18 crc kubenswrapper[4705]: E0216 15:09:18.072522 4705 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 15:09:18 crc kubenswrapper[4705]: E0216 15:09:18.072668 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist 
podName:2536f291-dea1-4673-acf7-9beaffa87817 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:19.072636432 +0000 UTC m=+953.257613518 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist") pod "speaker-nbgmf" (UID: "2536f291-dea1-4673-acf7-9beaffa87817") : secret "metallb-memberlist" not found Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.072732 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-metrics-certs\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.076814 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/493ad03c-5e3e-4726-9764-272f39f5aa37-metrics-certs\") pod \"controller-69bbfbf88f-5p2db\" (UID: \"493ad03c-5e3e-4726-9764-272f39f5aa37\") " pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.173554 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.184468 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.314972 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.652248 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255"] Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.731436 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zth6f"] Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.736452 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.749769 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zth6f"] Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.790775 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-5p2db"] Feb 16 15:09:18 crc kubenswrapper[4705]: W0216 15:09:18.795665 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod493ad03c_5e3e_4726_9764_272f39f5aa37.slice/crio-93b5156da07e3d44c3630f9968680183b0f1dc4e28b7e7b252547cef21d38ccc WatchSource:0}: Error finding container 93b5156da07e3d44c3630f9968680183b0f1dc4e28b7e7b252547cef21d38ccc: Status 404 returned error can't find the container with id 93b5156da07e3d44c3630f9968680183b0f1dc4e28b7e7b252547cef21d38ccc Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.889804 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-utilities\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.889884 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-catalog-content\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.890303 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv2sc\" (UniqueName: \"kubernetes.io/projected/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-kube-api-access-rv2sc\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.938277 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" event={"ID":"751baaae-9090-48b1-9bae-79b7527d6c02","Type":"ContainerStarted","Data":"793f3ad530efaf39bacc4bfe77342b4c42e982f3ef4fd5c9f4be8b8dc92d9390"} Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.939604 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-5p2db" event={"ID":"493ad03c-5e3e-4726-9764-272f39f5aa37","Type":"ContainerStarted","Data":"93b5156da07e3d44c3630f9968680183b0f1dc4e28b7e7b252547cef21d38ccc"} Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.940920 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"92b2162e0f48fcca515baebca68c2d5aa4544c6953532c9adfa3dbe2968d7588"} Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.992553 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv2sc\" (UniqueName: \"kubernetes.io/projected/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-kube-api-access-rv2sc\") pod 
\"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.992686 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-utilities\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.992715 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-catalog-content\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.993192 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-catalog-content\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:18 crc kubenswrapper[4705]: I0216 15:09:18.993309 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-utilities\") pod \"redhat-marketplace-zth6f\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.018424 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv2sc\" (UniqueName: \"kubernetes.io/projected/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-kube-api-access-rv2sc\") pod \"redhat-marketplace-zth6f\" (UID: 
\"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.067515 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.094601 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.098943 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2536f291-dea1-4673-acf7-9beaffa87817-memberlist\") pod \"speaker-nbgmf\" (UID: \"2536f291-dea1-4673-acf7-9beaffa87817\") " pod="metallb-system/speaker-nbgmf" Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.161632 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-nbgmf" Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.571748 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zth6f"] Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.961767 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zth6f" event={"ID":"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd","Type":"ContainerDied","Data":"3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b"} Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.961733 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerID="3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b" exitCode=0 Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.962121 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zth6f" event={"ID":"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd","Type":"ContainerStarted","Data":"f29a60f76f78314ae6f0243b56ba9336dc2a65e5f7b3d38a788960b018e46dc8"} Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.971722 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nbgmf" event={"ID":"2536f291-dea1-4673-acf7-9beaffa87817","Type":"ContainerStarted","Data":"6c12bc25f2a60a4aee880cb078919c06df4b8de3118a7ae2017ae5c67d221f72"} Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.971785 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nbgmf" event={"ID":"2536f291-dea1-4673-acf7-9beaffa87817","Type":"ContainerStarted","Data":"f4816f341107c3060505d53435ed97b6c6d9e99803ebb0268ac67463ddf586b2"} Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.971798 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nbgmf" 
event={"ID":"2536f291-dea1-4673-acf7-9beaffa87817","Type":"ContainerStarted","Data":"04cc7ce51873b70b99303f445c7e94b5f5fcb72d05693c43a6e904b3e5e88f2a"} Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.972529 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-nbgmf" Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.977025 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-5p2db" event={"ID":"493ad03c-5e3e-4726-9764-272f39f5aa37","Type":"ContainerStarted","Data":"5eb2353ebfda386e81122e2d07c8766ba45e4afbc1f8523702af45415b969bf4"} Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.977083 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-5p2db" event={"ID":"493ad03c-5e3e-4726-9764-272f39f5aa37","Type":"ContainerStarted","Data":"4fb4d5fd5e2b4eb26eb30f528546ce5ad47659d9d506c68881b0a513f6c5e8d9"} Feb 16 15:09:19 crc kubenswrapper[4705]: I0216 15:09:19.977201 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:20 crc kubenswrapper[4705]: I0216 15:09:20.023241 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-nbgmf" podStartSLOduration=3.023216986 podStartE2EDuration="3.023216986s" podCreationTimestamp="2026-02-16 15:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:20.020461168 +0000 UTC m=+954.205438244" watchObservedRunningTime="2026-02-16 15:09:20.023216986 +0000 UTC m=+954.208194052" Feb 16 15:09:20 crc kubenswrapper[4705]: I0216 15:09:20.040619 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-5p2db" podStartSLOduration=3.040597162 podStartE2EDuration="3.040597162s" podCreationTimestamp="2026-02-16 15:09:17 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:20.03913058 +0000 UTC m=+954.224107656" watchObservedRunningTime="2026-02-16 15:09:20.040597162 +0000 UTC m=+954.225574238" Feb 16 15:09:20 crc kubenswrapper[4705]: I0216 15:09:20.995254 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerID="076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73" exitCode=0 Feb 16 15:09:20 crc kubenswrapper[4705]: I0216 15:09:20.995688 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zth6f" event={"ID":"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd","Type":"ContainerDied","Data":"076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73"} Feb 16 15:09:22 crc kubenswrapper[4705]: I0216 15:09:22.011147 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zth6f" event={"ID":"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd","Type":"ContainerStarted","Data":"9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd"} Feb 16 15:09:26 crc kubenswrapper[4705]: I0216 15:09:26.450911 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zth6f" podStartSLOduration=7.003202296 podStartE2EDuration="8.450881382s" podCreationTimestamp="2026-02-16 15:09:18 +0000 UTC" firstStartedPulling="2026-02-16 15:09:19.964241935 +0000 UTC m=+954.149219011" lastFinishedPulling="2026-02-16 15:09:21.411921021 +0000 UTC m=+955.596898097" observedRunningTime="2026-02-16 15:09:22.05478839 +0000 UTC m=+956.239765466" watchObservedRunningTime="2026-02-16 15:09:26.450881382 +0000 UTC m=+960.635858468" Feb 16 15:09:28 crc kubenswrapper[4705]: I0216 15:09:28.067728 4705 generic.go:334] "Generic (PLEG): container finished" podID="06291746-6582-464c-9dff-b4b98a359885" 
containerID="b0eba72150775e1eda2c6ab0ac0dc2708448ef609b78997dde76ea7b87ee5681" exitCode=0 Feb 16 15:09:28 crc kubenswrapper[4705]: I0216 15:09:28.068471 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerDied","Data":"b0eba72150775e1eda2c6ab0ac0dc2708448ef609b78997dde76ea7b87ee5681"} Feb 16 15:09:28 crc kubenswrapper[4705]: I0216 15:09:28.071548 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" event={"ID":"751baaae-9090-48b1-9bae-79b7527d6c02","Type":"ContainerStarted","Data":"de95d36f5c2de8b320d55953fec50186a6ab8e32f534acd984ffd3a5b9a0336e"} Feb 16 15:09:28 crc kubenswrapper[4705]: I0216 15:09:28.071782 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:28 crc kubenswrapper[4705]: I0216 15:09:28.129117 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" podStartSLOduration=2.345289917 podStartE2EDuration="11.129084891s" podCreationTimestamp="2026-02-16 15:09:17 +0000 UTC" firstStartedPulling="2026-02-16 15:09:18.67023586 +0000 UTC m=+952.855212936" lastFinishedPulling="2026-02-16 15:09:27.454030824 +0000 UTC m=+961.639007910" observedRunningTime="2026-02-16 15:09:28.116478272 +0000 UTC m=+962.301455388" watchObservedRunningTime="2026-02-16 15:09:28.129084891 +0000 UTC m=+962.314061977" Feb 16 15:09:28 crc kubenswrapper[4705]: I0216 15:09:28.320475 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-5p2db" Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.070628 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.070942 4705 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.084422 4705 generic.go:334] "Generic (PLEG): container finished" podID="06291746-6582-464c-9dff-b4b98a359885" containerID="36bc395f2f2fd2a8a7b9e39bbda23ccf4cc8a04b5fe04924c0feaaeaa6c5c84d" exitCode=0 Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.085832 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerDied","Data":"36bc395f2f2fd2a8a7b9e39bbda23ccf4cc8a04b5fe04924c0feaaeaa6c5c84d"} Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.162624 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.168154 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-nbgmf" Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.260215 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:29 crc kubenswrapper[4705]: I0216 15:09:29.724568 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zth6f"] Feb 16 15:09:30 crc kubenswrapper[4705]: I0216 15:09:30.112259 4705 generic.go:334] "Generic (PLEG): container finished" podID="06291746-6582-464c-9dff-b4b98a359885" containerID="2579975ed16e4b45dfa3bef1c777bf3fdb95652c7ead4055d0e82f27daedb0b7" exitCode=0 Feb 16 15:09:30 crc kubenswrapper[4705]: I0216 15:09:30.112464 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerDied","Data":"2579975ed16e4b45dfa3bef1c777bf3fdb95652c7ead4055d0e82f27daedb0b7"} Feb 16 15:09:31 crc kubenswrapper[4705]: 
I0216 15:09:31.126439 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"1852a63991dc3e9a894aed9ddb064bb3d9a4d69e9db18e2a142ea37b17fd6331"} Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.127135 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"ac4bf677259900dfde7bdeedca51debd8d12b29304a9143c53e7dc60ab251821"} Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.127164 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"42e0332232b7a36b3d9580c5fe4d06a42bbeff722e569eb804f410b113854522"} Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.127183 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"f1c16ad4524940644bda9374a4bfc51482be02e9b704219d1993c87d7703ffb0"} Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.127200 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"b59781b6c3b2224974d529886ed0603072bcbeed73e567e8014c0d7ca1d530d7"} Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.126538 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zth6f" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="registry-server" containerID="cri-o://9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd" gracePeriod=2 Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.614609 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.777859 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-utilities\") pod \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.778596 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-catalog-content\") pod \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.778626 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv2sc\" (UniqueName: \"kubernetes.io/projected/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-kube-api-access-rv2sc\") pod \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\" (UID: \"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd\") " Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.778891 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-utilities" (OuterVolumeSpecName: "utilities") pod "f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" (UID: "f4d8fc5f-9372-47ef-9b1f-913bb6e319fd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.779673 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.788298 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-kube-api-access-rv2sc" (OuterVolumeSpecName: "kube-api-access-rv2sc") pod "f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" (UID: "f4d8fc5f-9372-47ef-9b1f-913bb6e319fd"). InnerVolumeSpecName "kube-api-access-rv2sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.801602 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" (UID: "f4d8fc5f-9372-47ef-9b1f-913bb6e319fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.881200 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:31 crc kubenswrapper[4705]: I0216 15:09:31.881245 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv2sc\" (UniqueName: \"kubernetes.io/projected/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd-kube-api-access-rv2sc\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.142174 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5znjj" event={"ID":"06291746-6582-464c-9dff-b4b98a359885","Type":"ContainerStarted","Data":"0947eb28a0ca7d84bdb8938d709066e9928c4dfea34b71403f0c5772e4088ae6"} Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.142503 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.146610 4705 generic.go:334] "Generic (PLEG): container finished" podID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerID="9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd" exitCode=0 Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.146679 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zth6f" event={"ID":"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd","Type":"ContainerDied","Data":"9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd"} Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.146724 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zth6f" event={"ID":"f4d8fc5f-9372-47ef-9b1f-913bb6e319fd","Type":"ContainerDied","Data":"f29a60f76f78314ae6f0243b56ba9336dc2a65e5f7b3d38a788960b018e46dc8"} Feb 16 15:09:32 crc 
kubenswrapper[4705]: I0216 15:09:32.146750 4705 scope.go:117] "RemoveContainer" containerID="9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.146972 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zth6f" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.166900 4705 scope.go:117] "RemoveContainer" containerID="076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.245704 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5znjj" podStartSLOduration=6.149947455 podStartE2EDuration="15.245681804s" podCreationTimestamp="2026-02-16 15:09:17 +0000 UTC" firstStartedPulling="2026-02-16 15:09:18.335849735 +0000 UTC m=+952.520826811" lastFinishedPulling="2026-02-16 15:09:27.431584084 +0000 UTC m=+961.616561160" observedRunningTime="2026-02-16 15:09:32.214415303 +0000 UTC m=+966.399392379" watchObservedRunningTime="2026-02-16 15:09:32.245681804 +0000 UTC m=+966.430658880" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.246585 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zth6f"] Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.251874 4705 scope.go:117] "RemoveContainer" containerID="3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.252638 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zth6f"] Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.284631 4705 scope.go:117] "RemoveContainer" containerID="9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd" Feb 16 15:09:32 crc kubenswrapper[4705]: E0216 15:09:32.288879 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd\": container with ID starting with 9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd not found: ID does not exist" containerID="9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.288938 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd"} err="failed to get container status \"9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd\": rpc error: code = NotFound desc = could not find container \"9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd\": container with ID starting with 9ca78dfbbb27110178bfc17d4f2bd38accee918ae9f6e07bc8a4ec0c1a5ee7cd not found: ID does not exist" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.288987 4705 scope.go:117] "RemoveContainer" containerID="076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73" Feb 16 15:09:32 crc kubenswrapper[4705]: E0216 15:09:32.347986 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73\": container with ID starting with 076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73 not found: ID does not exist" containerID="076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.348067 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73"} err="failed to get container status \"076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73\": rpc error: code = NotFound desc = could not find container 
\"076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73\": container with ID starting with 076f7a88b6bda43fd6d2aeccfc2beba07e6ce07f01e521a73e25aaa1f8b66d73 not found: ID does not exist" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.348116 4705 scope.go:117] "RemoveContainer" containerID="3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b" Feb 16 15:09:32 crc kubenswrapper[4705]: E0216 15:09:32.350060 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b\": container with ID starting with 3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b not found: ID does not exist" containerID="3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.350124 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b"} err="failed to get container status \"3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b\": rpc error: code = NotFound desc = could not find container \"3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b\": container with ID starting with 3223461f3e9b35c0da57093df6401506c79b60f32fbb8a87a4d3525dba02cb2b not found: ID does not exist" Feb 16 15:09:32 crc kubenswrapper[4705]: I0216 15:09:32.429643 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" path="/var/lib/kubelet/pods/f4d8fc5f-9372-47ef-9b1f-913bb6e319fd/volumes" Feb 16 15:09:33 crc kubenswrapper[4705]: I0216 15:09:33.185496 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:33 crc kubenswrapper[4705]: I0216 15:09:33.241839 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.730660 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rtf6z"] Feb 16 15:09:34 crc kubenswrapper[4705]: E0216 15:09:34.731333 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="extract-utilities" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.731346 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="extract-utilities" Feb 16 15:09:34 crc kubenswrapper[4705]: E0216 15:09:34.731400 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="registry-server" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.731407 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="registry-server" Feb 16 15:09:34 crc kubenswrapper[4705]: E0216 15:09:34.731424 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="extract-content" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.731431 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="extract-content" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.731577 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d8fc5f-9372-47ef-9b1f-913bb6e319fd" containerName="registry-server" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.732188 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.735060 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.735164 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.738438 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-krnkd" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.743917 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rtf6z"] Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.836126 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxqdd\" (UniqueName: \"kubernetes.io/projected/050e9b74-0e40-4a1a-8cb8-1ee038752bb6-kube-api-access-gxqdd\") pod \"openstack-operator-index-rtf6z\" (UID: \"050e9b74-0e40-4a1a-8cb8-1ee038752bb6\") " pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.939143 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxqdd\" (UniqueName: \"kubernetes.io/projected/050e9b74-0e40-4a1a-8cb8-1ee038752bb6-kube-api-access-gxqdd\") pod \"openstack-operator-index-rtf6z\" (UID: \"050e9b74-0e40-4a1a-8cb8-1ee038752bb6\") " pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:34 crc kubenswrapper[4705]: I0216 15:09:34.963426 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxqdd\" (UniqueName: \"kubernetes.io/projected/050e9b74-0e40-4a1a-8cb8-1ee038752bb6-kube-api-access-gxqdd\") pod \"openstack-operator-index-rtf6z\" (UID: 
\"050e9b74-0e40-4a1a-8cb8-1ee038752bb6\") " pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:35 crc kubenswrapper[4705]: I0216 15:09:35.053927 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:35 crc kubenswrapper[4705]: I0216 15:09:35.595023 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rtf6z"] Feb 16 15:09:35 crc kubenswrapper[4705]: W0216 15:09:35.598500 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod050e9b74_0e40_4a1a_8cb8_1ee038752bb6.slice/crio-7f608f5debacb5e78bea380c8482710d812bd97dfa07859909e289363d1810ef WatchSource:0}: Error finding container 7f608f5debacb5e78bea380c8482710d812bd97dfa07859909e289363d1810ef: Status 404 returned error can't find the container with id 7f608f5debacb5e78bea380c8482710d812bd97dfa07859909e289363d1810ef Feb 16 15:09:36 crc kubenswrapper[4705]: I0216 15:09:36.201470 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rtf6z" event={"ID":"050e9b74-0e40-4a1a-8cb8-1ee038752bb6","Type":"ContainerStarted","Data":"7f608f5debacb5e78bea380c8482710d812bd97dfa07859909e289363d1810ef"} Feb 16 15:09:38 crc kubenswrapper[4705]: I0216 15:09:38.184464 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-x4255" Feb 16 15:09:39 crc kubenswrapper[4705]: I0216 15:09:39.245497 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rtf6z" event={"ID":"050e9b74-0e40-4a1a-8cb8-1ee038752bb6","Type":"ContainerStarted","Data":"6105fd8b0dda2549ad134eeceae8eb65d69a3a77be1c4f9dd5149617fd46d539"} Feb 16 15:09:39 crc kubenswrapper[4705]: I0216 15:09:39.272892 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-index-rtf6z" podStartSLOduration=2.547057474 podStartE2EDuration="5.272836522s" podCreationTimestamp="2026-02-16 15:09:34 +0000 UTC" firstStartedPulling="2026-02-16 15:09:35.605332534 +0000 UTC m=+969.790309620" lastFinishedPulling="2026-02-16 15:09:38.331111552 +0000 UTC m=+972.516088668" observedRunningTime="2026-02-16 15:09:39.270910438 +0000 UTC m=+973.455887574" watchObservedRunningTime="2026-02-16 15:09:39.272836522 +0000 UTC m=+973.457813648" Feb 16 15:09:45 crc kubenswrapper[4705]: I0216 15:09:45.054941 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:45 crc kubenswrapper[4705]: I0216 15:09:45.055861 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:45 crc kubenswrapper[4705]: I0216 15:09:45.097200 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:45 crc kubenswrapper[4705]: I0216 15:09:45.335779 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-rtf6z" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.383819 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw"] Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.388061 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.390417 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-96tph" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.401120 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw"] Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.543471 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.543624 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz9l9\" (UniqueName: \"kubernetes.io/projected/1e942955-af48-4230-98dd-d8228e586600-kube-api-access-bz9l9\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.543731 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 
15:09:46.645953 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.646113 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz9l9\" (UniqueName: \"kubernetes.io/projected/1e942955-af48-4230-98dd-d8228e586600-kube-api-access-bz9l9\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.646229 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.647006 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.647023 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.667232 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz9l9\" (UniqueName: \"kubernetes.io/projected/1e942955-af48-4230-98dd-d8228e586600-kube-api-access-bz9l9\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:46 crc kubenswrapper[4705]: I0216 15:09:46.710844 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:47 crc kubenswrapper[4705]: I0216 15:09:47.277932 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw"] Feb 16 15:09:47 crc kubenswrapper[4705]: I0216 15:09:47.326206 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" event={"ID":"1e942955-af48-4230-98dd-d8228e586600","Type":"ContainerStarted","Data":"b84edfa3737949e5d206452d63b90b8c94b5f5507690ed9b4a240228bc5efca9"} Feb 16 15:09:48 crc kubenswrapper[4705]: I0216 15:09:48.190704 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5znjj" Feb 16 15:09:48 crc kubenswrapper[4705]: I0216 15:09:48.336156 4705 generic.go:334] "Generic (PLEG): container finished" podID="1e942955-af48-4230-98dd-d8228e586600" containerID="c7d00f9d8b8279528eee34c4b4d573aa302d1c0e7f059cd1885fcea5b5543c4c" exitCode=0 Feb 16 15:09:48 
crc kubenswrapper[4705]: I0216 15:09:48.336317 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" event={"ID":"1e942955-af48-4230-98dd-d8228e586600","Type":"ContainerDied","Data":"c7d00f9d8b8279528eee34c4b4d573aa302d1c0e7f059cd1885fcea5b5543c4c"} Feb 16 15:09:49 crc kubenswrapper[4705]: I0216 15:09:49.351235 4705 generic.go:334] "Generic (PLEG): container finished" podID="1e942955-af48-4230-98dd-d8228e586600" containerID="ecfc92dd12e9735b0cf209b641dd7125c0e01d34f4b2c3cee044137d7e87a423" exitCode=0 Feb 16 15:09:49 crc kubenswrapper[4705]: I0216 15:09:49.351320 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" event={"ID":"1e942955-af48-4230-98dd-d8228e586600","Type":"ContainerDied","Data":"ecfc92dd12e9735b0cf209b641dd7125c0e01d34f4b2c3cee044137d7e87a423"} Feb 16 15:09:50 crc kubenswrapper[4705]: I0216 15:09:50.398988 4705 generic.go:334] "Generic (PLEG): container finished" podID="1e942955-af48-4230-98dd-d8228e586600" containerID="1ec075e0b56c346b8aa17d7294bacadcf0d6aec224cca6ac22a5fa5b8bf01109" exitCode=0 Feb 16 15:09:50 crc kubenswrapper[4705]: I0216 15:09:50.399432 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" event={"ID":"1e942955-af48-4230-98dd-d8228e586600","Type":"ContainerDied","Data":"1ec075e0b56c346b8aa17d7294bacadcf0d6aec224cca6ac22a5fa5b8bf01109"} Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.710613 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.861838 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-util\") pod \"1e942955-af48-4230-98dd-d8228e586600\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.861985 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-bundle\") pod \"1e942955-af48-4230-98dd-d8228e586600\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.862113 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz9l9\" (UniqueName: \"kubernetes.io/projected/1e942955-af48-4230-98dd-d8228e586600-kube-api-access-bz9l9\") pod \"1e942955-af48-4230-98dd-d8228e586600\" (UID: \"1e942955-af48-4230-98dd-d8228e586600\") " Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.862904 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-bundle" (OuterVolumeSpecName: "bundle") pod "1e942955-af48-4230-98dd-d8228e586600" (UID: "1e942955-af48-4230-98dd-d8228e586600"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.868636 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e942955-af48-4230-98dd-d8228e586600-kube-api-access-bz9l9" (OuterVolumeSpecName: "kube-api-access-bz9l9") pod "1e942955-af48-4230-98dd-d8228e586600" (UID: "1e942955-af48-4230-98dd-d8228e586600"). InnerVolumeSpecName "kube-api-access-bz9l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.881292 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-util" (OuterVolumeSpecName: "util") pod "1e942955-af48-4230-98dd-d8228e586600" (UID: "1e942955-af48-4230-98dd-d8228e586600"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.964526 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz9l9\" (UniqueName: \"kubernetes.io/projected/1e942955-af48-4230-98dd-d8228e586600-kube-api-access-bz9l9\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.964920 4705 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:51 crc kubenswrapper[4705]: I0216 15:09:51.964990 4705 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e942955-af48-4230-98dd-d8228e586600-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:52 crc kubenswrapper[4705]: I0216 15:09:52.424472 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" Feb 16 15:09:52 crc kubenswrapper[4705]: I0216 15:09:52.441312 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw" event={"ID":"1e942955-af48-4230-98dd-d8228e586600","Type":"ContainerDied","Data":"b84edfa3737949e5d206452d63b90b8c94b5f5507690ed9b4a240228bc5efca9"} Feb 16 15:09:52 crc kubenswrapper[4705]: I0216 15:09:52.441398 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b84edfa3737949e5d206452d63b90b8c94b5f5507690ed9b4a240228bc5efca9" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.149219 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2"] Feb 16 15:09:55 crc kubenswrapper[4705]: E0216 15:09:55.150329 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e942955-af48-4230-98dd-d8228e586600" containerName="extract" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.150344 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e942955-af48-4230-98dd-d8228e586600" containerName="extract" Feb 16 15:09:55 crc kubenswrapper[4705]: E0216 15:09:55.150452 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e942955-af48-4230-98dd-d8228e586600" containerName="pull" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.150459 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e942955-af48-4230-98dd-d8228e586600" containerName="pull" Feb 16 15:09:55 crc kubenswrapper[4705]: E0216 15:09:55.150476 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e942955-af48-4230-98dd-d8228e586600" containerName="util" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.150483 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e942955-af48-4230-98dd-d8228e586600" 
containerName="util" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.150643 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e942955-af48-4230-98dd-d8228e586600" containerName="extract" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.151393 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.154427 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zpvqs" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.172964 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2"] Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.243916 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wkn2\" (UniqueName: \"kubernetes.io/projected/a8b2ba76-e9d9-404f-9859-22c40c63f1fb-kube-api-access-6wkn2\") pod \"openstack-operator-controller-init-787c798d66-r8xk2\" (UID: \"a8b2ba76-e9d9-404f-9859-22c40c63f1fb\") " pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.345383 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wkn2\" (UniqueName: \"kubernetes.io/projected/a8b2ba76-e9d9-404f-9859-22c40c63f1fb-kube-api-access-6wkn2\") pod \"openstack-operator-controller-init-787c798d66-r8xk2\" (UID: \"a8b2ba76-e9d9-404f-9859-22c40c63f1fb\") " pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.367405 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wkn2\" (UniqueName: 
\"kubernetes.io/projected/a8b2ba76-e9d9-404f-9859-22c40c63f1fb-kube-api-access-6wkn2\") pod \"openstack-operator-controller-init-787c798d66-r8xk2\" (UID: \"a8b2ba76-e9d9-404f-9859-22c40c63f1fb\") " pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.471967 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.738032 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ftls8"] Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.740070 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.811185 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftls8"] Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.863593 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-catalog-content\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.863694 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsmhv\" (UniqueName: \"kubernetes.io/projected/f177069e-fdb0-44b5-a098-948bbb859bbc-kube-api-access-jsmhv\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.863724 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-utilities\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.965976 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-catalog-content\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.966076 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsmhv\" (UniqueName: \"kubernetes.io/projected/f177069e-fdb0-44b5-a098-948bbb859bbc-kube-api-access-jsmhv\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.966112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-utilities\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.966819 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-utilities\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.967011 4705 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-catalog-content\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:55 crc kubenswrapper[4705]: I0216 15:09:55.996703 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsmhv\" (UniqueName: \"kubernetes.io/projected/f177069e-fdb0-44b5-a098-948bbb859bbc-kube-api-access-jsmhv\") pod \"community-operators-ftls8\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:56 crc kubenswrapper[4705]: I0216 15:09:56.044709 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2"] Feb 16 15:09:56 crc kubenswrapper[4705]: I0216 15:09:56.071385 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:09:56 crc kubenswrapper[4705]: I0216 15:09:56.475833 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" event={"ID":"a8b2ba76-e9d9-404f-9859-22c40c63f1fb","Type":"ContainerStarted","Data":"b35be9f54d11b2a61633a473e64debec951b744404005198956a4f5b4f213f02"} Feb 16 15:09:56 crc kubenswrapper[4705]: I0216 15:09:56.677208 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftls8"] Feb 16 15:09:56 crc kubenswrapper[4705]: W0216 15:09:56.697532 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf177069e_fdb0_44b5_a098_948bbb859bbc.slice/crio-d6405cff6e546d169c0b1f495bf632be75ae8c10439d0c913cb76b368f727722 WatchSource:0}: Error finding container 
d6405cff6e546d169c0b1f495bf632be75ae8c10439d0c913cb76b368f727722: Status 404 returned error can't find the container with id d6405cff6e546d169c0b1f495bf632be75ae8c10439d0c913cb76b368f727722 Feb 16 15:09:57 crc kubenswrapper[4705]: I0216 15:09:57.495354 4705 generic.go:334] "Generic (PLEG): container finished" podID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerID="12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc" exitCode=0 Feb 16 15:09:57 crc kubenswrapper[4705]: I0216 15:09:57.495750 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftls8" event={"ID":"f177069e-fdb0-44b5-a098-948bbb859bbc","Type":"ContainerDied","Data":"12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc"} Feb 16 15:09:57 crc kubenswrapper[4705]: I0216 15:09:57.495785 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftls8" event={"ID":"f177069e-fdb0-44b5-a098-948bbb859bbc","Type":"ContainerStarted","Data":"d6405cff6e546d169c0b1f495bf632be75ae8c10439d0c913cb76b368f727722"} Feb 16 15:10:01 crc kubenswrapper[4705]: I0216 15:10:01.535618 4705 generic.go:334] "Generic (PLEG): container finished" podID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerID="866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54" exitCode=0 Feb 16 15:10:01 crc kubenswrapper[4705]: I0216 15:10:01.535722 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftls8" event={"ID":"f177069e-fdb0-44b5-a098-948bbb859bbc","Type":"ContainerDied","Data":"866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54"} Feb 16 15:10:01 crc kubenswrapper[4705]: I0216 15:10:01.538407 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" 
event={"ID":"a8b2ba76-e9d9-404f-9859-22c40c63f1fb","Type":"ContainerStarted","Data":"825c95f4f1de8d5d902374e685350cbaaecb434eb6759ce16fc24439c2ed116f"} Feb 16 15:10:01 crc kubenswrapper[4705]: I0216 15:10:01.538684 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:10:01 crc kubenswrapper[4705]: I0216 15:10:01.611432 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" podStartSLOduration=1.796564769 podStartE2EDuration="6.61140816s" podCreationTimestamp="2026-02-16 15:09:55 +0000 UTC" firstStartedPulling="2026-02-16 15:09:56.051600979 +0000 UTC m=+990.236578055" lastFinishedPulling="2026-02-16 15:10:00.86644436 +0000 UTC m=+995.051421446" observedRunningTime="2026-02-16 15:10:01.610564126 +0000 UTC m=+995.795541232" watchObservedRunningTime="2026-02-16 15:10:01.61140816 +0000 UTC m=+995.796385246" Feb 16 15:10:02 crc kubenswrapper[4705]: I0216 15:10:02.554604 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftls8" event={"ID":"f177069e-fdb0-44b5-a098-948bbb859bbc","Type":"ContainerStarted","Data":"74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642"} Feb 16 15:10:02 crc kubenswrapper[4705]: I0216 15:10:02.579137 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ftls8" podStartSLOduration=2.963906813 podStartE2EDuration="7.579115421s" podCreationTimestamp="2026-02-16 15:09:55 +0000 UTC" firstStartedPulling="2026-02-16 15:09:57.499643706 +0000 UTC m=+991.684620782" lastFinishedPulling="2026-02-16 15:10:02.114852314 +0000 UTC m=+996.299829390" observedRunningTime="2026-02-16 15:10:02.572362009 +0000 UTC m=+996.757339105" watchObservedRunningTime="2026-02-16 15:10:02.579115421 +0000 UTC m=+996.764092497" Feb 16 15:10:06 crc 
kubenswrapper[4705]: I0216 15:10:06.072789 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:10:06 crc kubenswrapper[4705]: I0216 15:10:06.073898 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:10:06 crc kubenswrapper[4705]: I0216 15:10:06.158931 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:10:15 crc kubenswrapper[4705]: I0216 15:10:15.474281 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-787c798d66-r8xk2" Feb 16 15:10:16 crc kubenswrapper[4705]: I0216 15:10:16.159524 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:10:16 crc kubenswrapper[4705]: I0216 15:10:16.261078 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftls8"] Feb 16 15:10:16 crc kubenswrapper[4705]: I0216 15:10:16.723118 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ftls8" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="registry-server" containerID="cri-o://74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642" gracePeriod=2 Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.170987 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.284743 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-catalog-content\") pod \"f177069e-fdb0-44b5-a098-948bbb859bbc\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.284876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsmhv\" (UniqueName: \"kubernetes.io/projected/f177069e-fdb0-44b5-a098-948bbb859bbc-kube-api-access-jsmhv\") pod \"f177069e-fdb0-44b5-a098-948bbb859bbc\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.284968 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-utilities\") pod \"f177069e-fdb0-44b5-a098-948bbb859bbc\" (UID: \"f177069e-fdb0-44b5-a098-948bbb859bbc\") " Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.285868 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-utilities" (OuterVolumeSpecName: "utilities") pod "f177069e-fdb0-44b5-a098-948bbb859bbc" (UID: "f177069e-fdb0-44b5-a098-948bbb859bbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.291655 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f177069e-fdb0-44b5-a098-948bbb859bbc-kube-api-access-jsmhv" (OuterVolumeSpecName: "kube-api-access-jsmhv") pod "f177069e-fdb0-44b5-a098-948bbb859bbc" (UID: "f177069e-fdb0-44b5-a098-948bbb859bbc"). InnerVolumeSpecName "kube-api-access-jsmhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.333620 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f177069e-fdb0-44b5-a098-948bbb859bbc" (UID: "f177069e-fdb0-44b5-a098-948bbb859bbc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.395061 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.395133 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsmhv\" (UniqueName: \"kubernetes.io/projected/f177069e-fdb0-44b5-a098-948bbb859bbc-kube-api-access-jsmhv\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.395153 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f177069e-fdb0-44b5-a098-948bbb859bbc-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.735277 4705 generic.go:334] "Generic (PLEG): container finished" podID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerID="74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642" exitCode=0 Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.735343 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftls8" event={"ID":"f177069e-fdb0-44b5-a098-948bbb859bbc","Type":"ContainerDied","Data":"74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642"} Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.735417 4705 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-ftls8" event={"ID":"f177069e-fdb0-44b5-a098-948bbb859bbc","Type":"ContainerDied","Data":"d6405cff6e546d169c0b1f495bf632be75ae8c10439d0c913cb76b368f727722"} Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.735454 4705 scope.go:117] "RemoveContainer" containerID="74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.735484 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftls8" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.794175 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftls8"] Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.796253 4705 scope.go:117] "RemoveContainer" containerID="866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.805782 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ftls8"] Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.829337 4705 scope.go:117] "RemoveContainer" containerID="12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.880024 4705 scope.go:117] "RemoveContainer" containerID="74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642" Feb 16 15:10:17 crc kubenswrapper[4705]: E0216 15:10:17.880800 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642\": container with ID starting with 74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642 not found: ID does not exist" containerID="74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 
15:10:17.880844 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642"} err="failed to get container status \"74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642\": rpc error: code = NotFound desc = could not find container \"74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642\": container with ID starting with 74e224541eb0933222aa5b73e9e2b85d953c64ebc1f5880049d0c4858b640642 not found: ID does not exist" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.880874 4705 scope.go:117] "RemoveContainer" containerID="866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54" Feb 16 15:10:17 crc kubenswrapper[4705]: E0216 15:10:17.881246 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54\": container with ID starting with 866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54 not found: ID does not exist" containerID="866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.881277 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54"} err="failed to get container status \"866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54\": rpc error: code = NotFound desc = could not find container \"866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54\": container with ID starting with 866616b35a135e0935612a17ce648dc5e2660580f0ecd70466af8ac5ef72ed54 not found: ID does not exist" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.881295 4705 scope.go:117] "RemoveContainer" containerID="12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc" Feb 16 15:10:17 crc 
kubenswrapper[4705]: E0216 15:10:17.881582 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc\": container with ID starting with 12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc not found: ID does not exist" containerID="12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc" Feb 16 15:10:17 crc kubenswrapper[4705]: I0216 15:10:17.881606 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc"} err="failed to get container status \"12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc\": rpc error: code = NotFound desc = could not find container \"12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc\": container with ID starting with 12e39e3a450747daae25e98bbee39724b84a4af3b9070c9c921836ecac8c5cbc not found: ID does not exist" Feb 16 15:10:18 crc kubenswrapper[4705]: I0216 15:10:18.432075 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" path="/var/lib/kubelet/pods/f177069e-fdb0-44b5-a098-948bbb859bbc/volumes" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.468291 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7"] Feb 16 15:10:35 crc kubenswrapper[4705]: E0216 15:10:35.469423 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="extract-utilities" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.469438 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="extract-utilities" Feb 16 15:10:35 crc kubenswrapper[4705]: E0216 15:10:35.469450 4705 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="extract-content" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.469457 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="extract-content" Feb 16 15:10:35 crc kubenswrapper[4705]: E0216 15:10:35.469483 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="registry-server" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.469490 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="registry-server" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.469708 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f177069e-fdb0-44b5-a098-948bbb859bbc" containerName="registry-server" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.470481 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.477620 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-x552h" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.479059 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.480651 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.484035 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xmpx2" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.486009 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.492497 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.493710 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.500690 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tpf2v" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.535973 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.546430 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.567260 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqqdn\" (UniqueName: \"kubernetes.io/projected/1b9942d1-9e1e-436b-8a58-e37d6b55a00b-kube-api-access-hqqdn\") pod \"barbican-operator-controller-manager-868647ff47-f52r7\" (UID: \"1b9942d1-9e1e-436b-8a58-e37d6b55a00b\") " 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.567386 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfccs\" (UniqueName: \"kubernetes.io/projected/84edc365-fa2c-40bc-ae0e-b71ae094ab27-kube-api-access-gfccs\") pod \"cinder-operator-controller-manager-5d946d989d-s9vdm\" (UID: \"84edc365-fa2c-40bc-ae0e-b71ae094ab27\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.567435 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m6t8\" (UniqueName: \"kubernetes.io/projected/f0b4e27c-91ff-4540-bfff-e6c30849c75f-kube-api-access-5m6t8\") pod \"designate-operator-controller-manager-6d8bf5c495-fsx2w\" (UID: \"f0b4e27c-91ff-4540-bfff-e6c30849c75f\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.587177 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.588489 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.591388 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-26vj4" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.596688 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.598125 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.602454 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-sqfcj" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.608611 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.627321 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.628826 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.633565 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qx6x4" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.663462 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.671134 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8qd\" (UniqueName: \"kubernetes.io/projected/5ee1a78f-cea6-443b-9b43-9ed2334c5c9e-kube-api-access-fl8qd\") pod \"heat-operator-controller-manager-69f49c598c-f4fgx\" (UID: \"5ee1a78f-cea6-443b-9b43-9ed2334c5c9e\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.671701 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqbjx\" (UniqueName: 
\"kubernetes.io/projected/f1a4206b-818d-49e7-9177-9dc7373ded1c-kube-api-access-dqbjx\") pod \"horizon-operator-controller-manager-5b9b8895d5-q5n45\" (UID: \"f1a4206b-818d-49e7-9177-9dc7373ded1c\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.671776 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqqdn\" (UniqueName: \"kubernetes.io/projected/1b9942d1-9e1e-436b-8a58-e37d6b55a00b-kube-api-access-hqqdn\") pod \"barbican-operator-controller-manager-868647ff47-f52r7\" (UID: \"1b9942d1-9e1e-436b-8a58-e37d6b55a00b\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.671856 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfccs\" (UniqueName: \"kubernetes.io/projected/84edc365-fa2c-40bc-ae0e-b71ae094ab27-kube-api-access-gfccs\") pod \"cinder-operator-controller-manager-5d946d989d-s9vdm\" (UID: \"84edc365-fa2c-40bc-ae0e-b71ae094ab27\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.671944 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m6t8\" (UniqueName: \"kubernetes.io/projected/f0b4e27c-91ff-4540-bfff-e6c30849c75f-kube-api-access-5m6t8\") pod \"designate-operator-controller-manager-6d8bf5c495-fsx2w\" (UID: \"f0b4e27c-91ff-4540-bfff-e6c30849c75f\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.671978 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jw5s\" (UniqueName: \"kubernetes.io/projected/59e2a9a8-5a0d-4772-8d9c-b755fcd234be-kube-api-access-8jw5s\") pod 
\"glance-operator-controller-manager-77987464f4-xdlbv\" (UID: \"59e2a9a8-5a0d-4772-8d9c-b755fcd234be\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.698980 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqqdn\" (UniqueName: \"kubernetes.io/projected/1b9942d1-9e1e-436b-8a58-e37d6b55a00b-kube-api-access-hqqdn\") pod \"barbican-operator-controller-manager-868647ff47-f52r7\" (UID: \"1b9942d1-9e1e-436b-8a58-e37d6b55a00b\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.699358 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.715160 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfccs\" (UniqueName: \"kubernetes.io/projected/84edc365-fa2c-40bc-ae0e-b71ae094ab27-kube-api-access-gfccs\") pod \"cinder-operator-controller-manager-5d946d989d-s9vdm\" (UID: \"84edc365-fa2c-40bc-ae0e-b71ae094ab27\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.716678 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m6t8\" (UniqueName: \"kubernetes.io/projected/f0b4e27c-91ff-4540-bfff-e6c30849c75f-kube-api-access-5m6t8\") pod \"designate-operator-controller-manager-6d8bf5c495-fsx2w\" (UID: \"f0b4e27c-91ff-4540-bfff-e6c30849c75f\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.721138 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 
15:10:35.722257 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.724857 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-k7ftx" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.737843 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.749583 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.760473 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.778168 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.796147 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9lpc6" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.812568 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.812696 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.815735 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.816191 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqbjx\" (UniqueName: \"kubernetes.io/projected/f1a4206b-818d-49e7-9177-9dc7373ded1c-kube-api-access-dqbjx\") pod \"horizon-operator-controller-manager-5b9b8895d5-q5n45\" (UID: \"f1a4206b-818d-49e7-9177-9dc7373ded1c\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.816595 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jw5s\" (UniqueName: \"kubernetes.io/projected/59e2a9a8-5a0d-4772-8d9c-b755fcd234be-kube-api-access-8jw5s\") pod \"glance-operator-controller-manager-77987464f4-xdlbv\" (UID: \"59e2a9a8-5a0d-4772-8d9c-b755fcd234be\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.816655 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqzh5\" (UniqueName: \"kubernetes.io/projected/a6d65371-bf15-42b9-857d-c4c7350aa402-kube-api-access-mqzh5\") pod \"ironic-operator-controller-manager-554564d7fc-ftdcn\" (UID: \"a6d65371-bf15-42b9-857d-c4c7350aa402\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.816829 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl8qd\" (UniqueName: \"kubernetes.io/projected/5ee1a78f-cea6-443b-9b43-9ed2334c5c9e-kube-api-access-fl8qd\") pod \"heat-operator-controller-manager-69f49c598c-f4fgx\" (UID: \"5ee1a78f-cea6-443b-9b43-9ed2334c5c9e\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:10:35 crc kubenswrapper[4705]: 
I0216 15:10:35.868611 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.871167 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.878627 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl8qd\" (UniqueName: \"kubernetes.io/projected/5ee1a78f-cea6-443b-9b43-9ed2334c5c9e-kube-api-access-fl8qd\") pod \"heat-operator-controller-manager-69f49c598c-f4fgx\" (UID: \"5ee1a78f-cea6-443b-9b43-9ed2334c5c9e\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.901268 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xnsf9" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.903522 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqbjx\" (UniqueName: \"kubernetes.io/projected/f1a4206b-818d-49e7-9177-9dc7373ded1c-kube-api-access-dqbjx\") pod \"horizon-operator-controller-manager-5b9b8895d5-q5n45\" (UID: \"f1a4206b-818d-49e7-9177-9dc7373ded1c\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.918240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqzh5\" (UniqueName: \"kubernetes.io/projected/a6d65371-bf15-42b9-857d-c4c7350aa402-kube-api-access-mqzh5\") pod \"ironic-operator-controller-manager-554564d7fc-ftdcn\" (UID: \"a6d65371-bf15-42b9-857d-c4c7350aa402\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 
15:10:35.918342 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-698r9\" (UniqueName: \"kubernetes.io/projected/9bd1689a-ae93-4ac0-ab21-c899756ef07a-kube-api-access-698r9\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.918427 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.919424 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jw5s\" (UniqueName: \"kubernetes.io/projected/59e2a9a8-5a0d-4772-8d9c-b755fcd234be-kube-api-access-8jw5s\") pod \"glance-operator-controller-manager-77987464f4-xdlbv\" (UID: \"59e2a9a8-5a0d-4772-8d9c-b755fcd234be\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.928820 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.943458 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.944979 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.952280 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqzh5\" (UniqueName: \"kubernetes.io/projected/a6d65371-bf15-42b9-857d-c4c7350aa402-kube-api-access-mqzh5\") pod \"ironic-operator-controller-manager-554564d7fc-ftdcn\" (UID: \"a6d65371-bf15-42b9-857d-c4c7350aa402\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.961777 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr"] Feb 16 15:10:35 crc kubenswrapper[4705]: I0216 15:10:35.968948 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.019520 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.021001 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.026920 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-tw72f" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.027126 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84pqc\" (UniqueName: \"kubernetes.io/projected/34eadd57-e91b-4324-93c0-ede339012ab3-kube-api-access-84pqc\") pod \"keystone-operator-controller-manager-b4d948c87-8lztr\" (UID: \"34eadd57-e91b-4324-93c0-ede339012ab3\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.027242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-698r9\" (UniqueName: \"kubernetes.io/projected/9bd1689a-ae93-4ac0-ab21-c899756ef07a-kube-api-access-698r9\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.027461 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.027615 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759"] Feb 16 15:10:36 crc kubenswrapper[4705]: E0216 15:10:36.027683 4705 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:36 crc kubenswrapper[4705]: E0216 15:10:36.027755 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert podName:9bd1689a-ae93-4ac0-ab21-c899756ef07a nodeName:}" failed. No retries permitted until 2026-02-16 15:10:36.527730721 +0000 UTC m=+1030.712707797 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert") pod "infra-operator-controller-manager-79d975b745-xg4dw" (UID: "9bd1689a-ae93-4ac0-ab21-c899756ef07a") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.028980 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.034608 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-hlp4w" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.046126 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.056652 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.058110 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.061747 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-t6bmm" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.062031 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-698r9\" (UniqueName: \"kubernetes.io/projected/9bd1689a-ae93-4ac0-ab21-c899756ef07a-kube-api-access-698r9\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.087997 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.129205 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.132418 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84pqc\" (UniqueName: \"kubernetes.io/projected/34eadd57-e91b-4324-93c0-ede339012ab3-kube-api-access-84pqc\") pod \"keystone-operator-controller-manager-b4d948c87-8lztr\" (UID: \"34eadd57-e91b-4324-93c0-ede339012ab3\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.132535 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q5l7\" (UniqueName: \"kubernetes.io/projected/e73efbc6-26db-4760-b745-3c93c9b2329e-kube-api-access-8q5l7\") pod \"mariadb-operator-controller-manager-6994f66f48-kh759\" (UID: 
\"e73efbc6-26db-4760-b745-3c93c9b2329e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.132626 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvxl4\" (UniqueName: \"kubernetes.io/projected/f06e9156-0c7b-41f6-a1cf-83820a7e7732-kube-api-access-dvxl4\") pod \"manila-operator-controller-manager-54f6768c69-dnbpd\" (UID: \"f06e9156-0c7b-41f6-a1cf-83820a7e7732\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.142197 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.159474 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84pqc\" (UniqueName: \"kubernetes.io/projected/34eadd57-e91b-4324-93c0-ede339012ab3-kube-api-access-84pqc\") pod \"keystone-operator-controller-manager-b4d948c87-8lztr\" (UID: \"34eadd57-e91b-4324-93c0-ede339012ab3\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.165658 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-b6587"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.170865 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.175058 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-8vjr8" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.177094 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.178655 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.182675 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ct77r" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.234480 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q5l7\" (UniqueName: \"kubernetes.io/projected/e73efbc6-26db-4760-b745-3c93c9b2329e-kube-api-access-8q5l7\") pod \"mariadb-operator-controller-manager-6994f66f48-kh759\" (UID: \"e73efbc6-26db-4760-b745-3c93c9b2329e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.234529 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vwfb\" (UniqueName: \"kubernetes.io/projected/9f0ad3cb-ac80-4462-bd97-b09f9367dc54-kube-api-access-8vwfb\") pod \"neutron-operator-controller-manager-64ddbf8bb-2vvm8\" (UID: \"9f0ad3cb-ac80-4462-bd97-b09f9367dc54\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.234610 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-dvxl4\" (UniqueName: \"kubernetes.io/projected/f06e9156-0c7b-41f6-a1cf-83820a7e7732-kube-api-access-dvxl4\") pod \"manila-operator-controller-manager-54f6768c69-dnbpd\" (UID: \"f06e9156-0c7b-41f6-a1cf-83820a7e7732\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.243695 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-b6587"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.259151 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.259731 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q5l7\" (UniqueName: \"kubernetes.io/projected/e73efbc6-26db-4760-b745-3c93c9b2329e-kube-api-access-8q5l7\") pod \"mariadb-operator-controller-manager-6994f66f48-kh759\" (UID: \"e73efbc6-26db-4760-b745-3c93c9b2329e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.277486 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.278999 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.279080 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvxl4\" (UniqueName: \"kubernetes.io/projected/f06e9156-0c7b-41f6-a1cf-83820a7e7732-kube-api-access-dvxl4\") pod \"manila-operator-controller-manager-54f6768c69-dnbpd\" (UID: \"f06e9156-0c7b-41f6-a1cf-83820a7e7732\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.286296 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vw46g" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.295802 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.336123 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnt9q\" (UniqueName: \"kubernetes.io/projected/8279d837-6ad4-4e2b-a03a-eb0a24a30998-kube-api-access-rnt9q\") pod \"nova-operator-controller-manager-567668f5cf-b6587\" (UID: \"8279d837-6ad4-4e2b-a03a-eb0a24a30998\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.336265 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vwfb\" (UniqueName: \"kubernetes.io/projected/9f0ad3cb-ac80-4462-bd97-b09f9367dc54-kube-api-access-8vwfb\") pod \"neutron-operator-controller-manager-64ddbf8bb-2vvm8\" (UID: \"9f0ad3cb-ac80-4462-bd97-b09f9367dc54\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.336324 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnmxx\" (UniqueName: \"kubernetes.io/projected/7373be90-eefb-4c2b-bdbd-a312daef2434-kube-api-access-bnmxx\") pod \"octavia-operator-controller-manager-69f8888797-zk57l\" (UID: \"7373be90-eefb-4c2b-bdbd-a312daef2434\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.350043 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.374723 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vwfb\" (UniqueName: \"kubernetes.io/projected/9f0ad3cb-ac80-4462-bd97-b09f9367dc54-kube-api-access-8vwfb\") pod \"neutron-operator-controller-manager-64ddbf8bb-2vvm8\" (UID: \"9f0ad3cb-ac80-4462-bd97-b09f9367dc54\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.387854 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.391901 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.393606 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.396785 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9wwsz" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.397019 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.408464 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.426153 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.437557 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnmxx\" (UniqueName: \"kubernetes.io/projected/7373be90-eefb-4c2b-bdbd-a312daef2434-kube-api-access-bnmxx\") pod \"octavia-operator-controller-manager-69f8888797-zk57l\" (UID: \"7373be90-eefb-4c2b-bdbd-a312daef2434\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.437634 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnt9q\" (UniqueName: \"kubernetes.io/projected/8279d837-6ad4-4e2b-a03a-eb0a24a30998-kube-api-access-rnt9q\") pod \"nova-operator-controller-manager-567668f5cf-b6587\" (UID: \"8279d837-6ad4-4e2b-a03a-eb0a24a30998\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.437690 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh952\" (UniqueName: \"kubernetes.io/projected/d4a1c432-7691-472b-80af-caaa6afcacb2-kube-api-access-nh952\") pod \"ovn-operator-controller-manager-d44cf6b75-hw64s\" (UID: \"d4a1c432-7691-472b-80af-caaa6afcacb2\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.480493 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnmxx\" (UniqueName: \"kubernetes.io/projected/7373be90-eefb-4c2b-bdbd-a312daef2434-kube-api-access-bnmxx\") pod \"octavia-operator-controller-manager-69f8888797-zk57l\" (UID: \"7373be90-eefb-4c2b-bdbd-a312daef2434\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.510715 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnt9q\" (UniqueName: \"kubernetes.io/projected/8279d837-6ad4-4e2b-a03a-eb0a24a30998-kube-api-access-rnt9q\") pod \"nova-operator-controller-manager-567668f5cf-b6587\" (UID: \"8279d837-6ad4-4e2b-a03a-eb0a24a30998\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.539699 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.539775 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gswht\" (UniqueName: 
\"kubernetes.io/projected/1872b592-a1cc-445a-b75f-f658612dc160-kube-api-access-gswht\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.539888 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.539999 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh952\" (UniqueName: \"kubernetes.io/projected/d4a1c432-7691-472b-80af-caaa6afcacb2-kube-api-access-nh952\") pod \"ovn-operator-controller-manager-d44cf6b75-hw64s\" (UID: \"d4a1c432-7691-472b-80af-caaa6afcacb2\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:10:36 crc kubenswrapper[4705]: E0216 15:10:36.546153 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:36 crc kubenswrapper[4705]: E0216 15:10:36.546236 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert podName:9bd1689a-ae93-4ac0-ab21-c899756ef07a nodeName:}" failed. No retries permitted until 2026-02-16 15:10:37.546212704 +0000 UTC m=+1031.731189970 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert") pod "infra-operator-controller-manager-79d975b745-xg4dw" (UID: "9bd1689a-ae93-4ac0-ab21-c899756ef07a") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.559519 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.560799 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.560827 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.560844 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.561789 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.562433 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.565824 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.562638 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.562603 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.562703 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.563330 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh952\" (UniqueName: \"kubernetes.io/projected/d4a1c432-7691-472b-80af-caaa6afcacb2-kube-api-access-nh952\") pod \"ovn-operator-controller-manager-d44cf6b75-hw64s\" (UID: \"d4a1c432-7691-472b-80af-caaa6afcacb2\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.573672 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-m9bpn" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.574028 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-kpsnn" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.582916 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-n2trf" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.596277 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.613140 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.642181 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.658207 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gswht\" (UniqueName: \"kubernetes.io/projected/1872b592-a1cc-445a-b75f-f658612dc160-kube-api-access-gswht\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.658453 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbk8\" (UniqueName: \"kubernetes.io/projected/ca67e7ec-20a9-4768-ae37-3aa90f721201-kube-api-access-8pbk8\") pod \"swift-operator-controller-manager-68f46476f-6c6fr\" (UID: \"ca67e7ec-20a9-4768-ae37-3aa90f721201\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.665284 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-bk9rm"] Feb 16 15:10:36 crc kubenswrapper[4705]: E0216 15:10:36.642408 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:36 crc kubenswrapper[4705]: E0216 
15:10:36.665662 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert podName:1872b592-a1cc-445a-b75f-f658612dc160 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:37.165599888 +0000 UTC m=+1031.350576964 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" (UID: "1872b592-a1cc-445a-b75f-f658612dc160") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.669977 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.673526 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-rpg8h" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.674754 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.687521 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2zvv\" (UniqueName: \"kubernetes.io/projected/794d8603-8fa6-4068-8a38-e0825d42ae3f-kube-api-access-j2zvv\") pod \"placement-operator-controller-manager-8497b45c89-vkmgq\" (UID: \"794d8603-8fa6-4068-8a38-e0825d42ae3f\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.687686 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br499\" (UniqueName: \"kubernetes.io/projected/8d4c4ad7-542f-4d25-a444-7b4752e43f89-kube-api-access-br499\") pod \"telemetry-operator-controller-manager-6ccb9b958b-qbt7j\" (UID: \"8d4c4ad7-542f-4d25-a444-7b4752e43f89\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.699540 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gswht\" (UniqueName: \"kubernetes.io/projected/1872b592-a1cc-445a-b75f-f658612dc160-kube-api-access-gswht\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.733979 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-bk9rm"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.794216 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.800441 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pbk8\" (UniqueName: \"kubernetes.io/projected/ca67e7ec-20a9-4768-ae37-3aa90f721201-kube-api-access-8pbk8\") pod \"swift-operator-controller-manager-68f46476f-6c6fr\" (UID: \"ca67e7ec-20a9-4768-ae37-3aa90f721201\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.800547 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2zvv\" (UniqueName: \"kubernetes.io/projected/794d8603-8fa6-4068-8a38-e0825d42ae3f-kube-api-access-j2zvv\") pod \"placement-operator-controller-manager-8497b45c89-vkmgq\" (UID: \"794d8603-8fa6-4068-8a38-e0825d42ae3f\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.800607 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br499\" (UniqueName: \"kubernetes.io/projected/8d4c4ad7-542f-4d25-a444-7b4752e43f89-kube-api-access-br499\") pod \"telemetry-operator-controller-manager-6ccb9b958b-qbt7j\" (UID: \"8d4c4ad7-542f-4d25-a444-7b4752e43f89\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.800642 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p5hh\" (UniqueName: \"kubernetes.io/projected/c66cb2ee-a6d3-454b-a2ea-a160038b76f6-kube-api-access-9p5hh\") pod \"test-operator-controller-manager-7866795846-bk9rm\" (UID: \"c66cb2ee-a6d3-454b-a2ea-a160038b76f6\") " pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.800926 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.808532 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-v8lz9" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.821325 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2zvv\" (UniqueName: \"kubernetes.io/projected/794d8603-8fa6-4068-8a38-e0825d42ae3f-kube-api-access-j2zvv\") pod \"placement-operator-controller-manager-8497b45c89-vkmgq\" (UID: \"794d8603-8fa6-4068-8a38-e0825d42ae3f\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.823817 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pbk8\" (UniqueName: \"kubernetes.io/projected/ca67e7ec-20a9-4768-ae37-3aa90f721201-kube-api-access-8pbk8\") pod \"swift-operator-controller-manager-68f46476f-6c6fr\" (UID: \"ca67e7ec-20a9-4768-ae37-3aa90f721201\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.828268 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br499\" (UniqueName: \"kubernetes.io/projected/8d4c4ad7-542f-4d25-a444-7b4752e43f89-kube-api-access-br499\") pod \"telemetry-operator-controller-manager-6ccb9b958b-qbt7j\" (UID: \"8d4c4ad7-542f-4d25-a444-7b4752e43f89\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.830509 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.908428 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-9p5hh\" (UniqueName: \"kubernetes.io/projected/c66cb2ee-a6d3-454b-a2ea-a160038b76f6-kube-api-access-9p5hh\") pod \"test-operator-controller-manager-7866795846-bk9rm\" (UID: \"c66cb2ee-a6d3-454b-a2ea-a160038b76f6\") " pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.908908 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjkq7\" (UniqueName: \"kubernetes.io/projected/d583ac10-9ad2-4f95-9787-74f2cb28c943-kube-api-access-mjkq7\") pod \"watcher-operator-controller-manager-5db88f68c-77d2l\" (UID: \"d583ac10-9ad2-4f95-9787-74f2cb28c943\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.926780 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.938586 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.967475 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p5hh\" (UniqueName: \"kubernetes.io/projected/c66cb2ee-a6d3-454b-a2ea-a160038b76f6-kube-api-access-9p5hh\") pod \"test-operator-controller-manager-7866795846-bk9rm\" (UID: \"c66cb2ee-a6d3-454b-a2ea-a160038b76f6\") " pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.979176 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"] Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.981096 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:36 crc kubenswrapper[4705]: I0216 15:10:36.992078 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.000760 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.011187 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjkq7\" (UniqueName: \"kubernetes.io/projected/d583ac10-9ad2-4f95-9787-74f2cb28c943-kube-api-access-mjkq7\") pod \"watcher-operator-controller-manager-5db88f68c-77d2l\" (UID: \"d583ac10-9ad2-4f95-9787-74f2cb28c943\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.014706 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.014965 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-b5p6j" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.015252 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.027923 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.054671 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjkq7\" (UniqueName: 
\"kubernetes.io/projected/d583ac10-9ad2-4f95-9787-74f2cb28c943-kube-api-access-mjkq7\") pod \"watcher-operator-controller-manager-5db88f68c-77d2l\" (UID: \"d583ac10-9ad2-4f95-9787-74f2cb28c943\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.075752 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.077255 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.084714 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.088050 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-d2bn5" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.113536 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.113917 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " 
pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.120393 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrf9c\" (UniqueName: \"kubernetes.io/projected/07891331-9fdb-4922-aea1-6a3acf7f656f-kube-api-access-zrf9c\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.145460 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.192444 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.237831 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.237895 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.237944 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zrf9c\" (UniqueName: \"kubernetes.io/projected/07891331-9fdb-4922-aea1-6a3acf7f656f-kube-api-access-zrf9c\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.238090 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.238149 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq9sh\" (UniqueName: \"kubernetes.io/projected/d67e5221-5cd4-4659-a41b-5d470f435c3e-kube-api-access-bq9sh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5s9ck\" (UID: \"d67e5221-5cd4-4659-a41b-5d470f435c3e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.238355 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.238465 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert podName:1872b592-a1cc-445a-b75f-f658612dc160 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:38.23843417 +0000 UTC m=+1032.423411246 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" (UID: "1872b592-a1cc-445a-b75f-f658612dc160") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.238755 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.238805 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:37.73878901 +0000 UTC m=+1031.923766076 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "metrics-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.238856 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.238890 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:37.738881613 +0000 UTC m=+1031.923858689 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.257858 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.294353 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrf9c\" (UniqueName: \"kubernetes.io/projected/07891331-9fdb-4922-aea1-6a3acf7f656f-kube-api-access-zrf9c\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.351156 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq9sh\" (UniqueName: \"kubernetes.io/projected/d67e5221-5cd4-4659-a41b-5d470f435c3e-kube-api-access-bq9sh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5s9ck\" (UID: \"d67e5221-5cd4-4659-a41b-5d470f435c3e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.412936 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq9sh\" (UniqueName: \"kubernetes.io/projected/d67e5221-5cd4-4659-a41b-5d470f435c3e-kube-api-access-bq9sh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5s9ck\" (UID: \"d67e5221-5cd4-4659-a41b-5d470f435c3e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.451902 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.561968 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.563709 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.563763 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert podName:9bd1689a-ae93-4ac0-ab21-c899756ef07a nodeName:}" failed. No retries permitted until 2026-02-16 15:10:39.563746785 +0000 UTC m=+1033.748723861 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert") pod "infra-operator-controller-manager-79d975b745-xg4dw" (UID: "9bd1689a-ae93-4ac0-ab21-c899756ef07a") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.768635 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.768870 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.768927 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.768960 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:38.768939326 +0000 UTC m=+1032.953916392 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "metrics-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.769774 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: E0216 15:10:37.769865 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:38.769842572 +0000 UTC m=+1032.954819828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "webhook-server-cert" not found Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.890418 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.923424 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.935084 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w"] Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.949803 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr"] Feb 16 15:10:37 crc kubenswrapper[4705]: W0216 
15:10:37.978789 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34eadd57_e91b_4324_93c0_ede339012ab3.slice/crio-fc1f98378e5f11da16ab5dbaa99154b8e15fef44808620bf55830e344f565529 WatchSource:0}: Error finding container fc1f98378e5f11da16ab5dbaa99154b8e15fef44808620bf55830e344f565529: Status 404 returned error can't find the container with id fc1f98378e5f11da16ab5dbaa99154b8e15fef44808620bf55830e344f565529 Feb 16 15:10:37 crc kubenswrapper[4705]: W0216 15:10:37.980274 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0b4e27c_91ff_4540_bfff_e6c30849c75f.slice/crio-6c83711f155a713c81139048dd75ef6cae14a37e9e23a00913ac912e9d8318ea WatchSource:0}: Error finding container 6c83711f155a713c81139048dd75ef6cae14a37e9e23a00913ac912e9d8318ea: Status 404 returned error can't find the container with id 6c83711f155a713c81139048dd75ef6cae14a37e9e23a00913ac912e9d8318ea Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.991496 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" event={"ID":"84edc365-fa2c-40bc-ae0e-b71ae094ab27","Type":"ContainerStarted","Data":"71c77b6249de7bf666267115eee47697de039f8777efddbd412fddb2d4f335e4"} Feb 16 15:10:37 crc kubenswrapper[4705]: I0216 15:10:37.998262 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" event={"ID":"f1a4206b-818d-49e7-9177-9dc7373ded1c","Type":"ContainerStarted","Data":"882be1bf4b81d928fb77017cdcb45594b1ef9b78db0197ef17df96be6b44eaf7"} Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.004293 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.013674 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" event={"ID":"5ee1a78f-cea6-443b-9b43-9ed2334c5c9e","Type":"ContainerStarted","Data":"56fb7f87cc952bfe9df4b5094af90d90feff05c4c3e0d26258650fd59ce5e9e1"} Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.015962 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" event={"ID":"1b9942d1-9e1e-436b-8a58-e37d6b55a00b","Type":"ContainerStarted","Data":"84cbeea1b8569314e3a39d19a6c3a81960c05b9ed365d5254499a7b0a3c593d6"} Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.178894 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.200424 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.262236 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.310827 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:38 crc kubenswrapper[4705]: E0216 15:10:38.311045 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:38 crc kubenswrapper[4705]: E0216 15:10:38.311110 4705 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert podName:1872b592-a1cc-445a-b75f-f658612dc160 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:40.311093834 +0000 UTC m=+1034.496070910 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" (UID: "1872b592-a1cc-445a-b75f-f658612dc160") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.543281 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.563146 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-b6587"] Feb 16 15:10:38 crc kubenswrapper[4705]: W0216 15:10:38.572333 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8279d837_6ad4_4e2b_a03a_eb0a24a30998.slice/crio-0ce1b6f4b06ddcef363b2f69e26bee286cff0854df33526f1a42c63c0d8a806c WatchSource:0}: Error finding container 0ce1b6f4b06ddcef363b2f69e26bee286cff0854df33526f1a42c63c0d8a806c: Status 404 returned error can't find the container with id 0ce1b6f4b06ddcef363b2f69e26bee286cff0854df33526f1a42c63c0d8a806c Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.604478 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.611248 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l"] Feb 16 15:10:38 crc kubenswrapper[4705]: W0216 15:10:38.662308 
4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7373be90_eefb_4c2b_bdbd_a312daef2434.slice/crio-5769ab6c98393e088e9b85a18cf50620cf4bfc26eca3b70476ee6a82c08c4ad2 WatchSource:0}: Error finding container 5769ab6c98393e088e9b85a18cf50620cf4bfc26eca3b70476ee6a82c08c4ad2: Status 404 returned error can't find the container with id 5769ab6c98393e088e9b85a18cf50620cf4bfc26eca3b70476ee6a82c08c4ad2 Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.822664 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.823143 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:38 crc kubenswrapper[4705]: E0216 15:10:38.823336 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:10:38 crc kubenswrapper[4705]: E0216 15:10:38.823401 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:40.823383781 +0000 UTC m=+1035.008360857 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "webhook-server-cert" not found Feb 16 15:10:38 crc kubenswrapper[4705]: E0216 15:10:38.823776 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:10:38 crc kubenswrapper[4705]: E0216 15:10:38.823808 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:40.823800002 +0000 UTC m=+1035.008777078 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "metrics-server-cert" not found Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.921975 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.948154 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq"] Feb 16 15:10:38 crc kubenswrapper[4705]: I0216 15:10:38.981459 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-bk9rm"] Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.010725 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck"] Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.044770 
4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l"] Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.109670 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j"] Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.112315 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" event={"ID":"e73efbc6-26db-4760-b745-3c93c9b2329e","Type":"ContainerStarted","Data":"f6b915c7b7aaeaa24ad5d28f57edad862cae1a3a23b0775e952534bdb6f05ab5"} Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.118856 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" event={"ID":"a6d65371-bf15-42b9-857d-c4c7350aa402","Type":"ContainerStarted","Data":"07ab2daec4cd1119e94220cf4a6e5648aae2f86209abc64857868e36703902c5"} Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.166436 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" event={"ID":"9f0ad3cb-ac80-4462-bd97-b09f9367dc54","Type":"ContainerStarted","Data":"c27554cf31fd824856e9c4d0d610a41b7e54d540006b10b19075ac1a6099dcf4"} Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.169181 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" event={"ID":"f0b4e27c-91ff-4540-bfff-e6c30849c75f","Type":"ContainerStarted","Data":"6c83711f155a713c81139048dd75ef6cae14a37e9e23a00913ac912e9d8318ea"} Feb 16 15:10:39 crc kubenswrapper[4705]: E0216 15:10:39.171625 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bq9sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5s9ck_openstack-operators(d67e5221-5cd4-4659-a41b-5d470f435c3e): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.171916 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" event={"ID":"f06e9156-0c7b-41f6-a1cf-83820a7e7732","Type":"ContainerStarted","Data":"21edcf6feeea5a9d0e65e6f05a309d694fd65f17fff3c96f93509357337e456d"} Feb 16 15:10:39 crc kubenswrapper[4705]: E0216 15:10:39.173032 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" podUID="d67e5221-5cd4-4659-a41b-5d470f435c3e" Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.175304 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" event={"ID":"34eadd57-e91b-4324-93c0-ede339012ab3","Type":"ContainerStarted","Data":"fc1f98378e5f11da16ab5dbaa99154b8e15fef44808620bf55830e344f565529"} Feb 16 15:10:39 crc kubenswrapper[4705]: W0216 15:10:39.201919 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d4c4ad7_542f_4d25_a444_7b4752e43f89.slice/crio-0ecea067f607b3899bc3a5a0881b814f78da418dbdb8e56a01e2232763373878 WatchSource:0}: Error finding container 0ecea067f607b3899bc3a5a0881b814f78da418dbdb8e56a01e2232763373878: Status 404 returned error can't find the container with id 0ecea067f607b3899bc3a5a0881b814f78da418dbdb8e56a01e2232763373878 Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.209262 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" event={"ID":"59e2a9a8-5a0d-4772-8d9c-b755fcd234be","Type":"ContainerStarted","Data":"b73872e77f66a39eee1575c5ee3d8f38ac806a620df51b664b46eeeee35e64be"} Feb 16 15:10:39 crc 
kubenswrapper[4705]: E0216 15:10:39.210483 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.102:5001/openstack-k8s-operators/telemetry-operator:7c764327dd2ffab22c122e2f1706e47c6eeb2902,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-br499,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6ccb9b958b-qbt7j_openstack-operators(8d4c4ad7-542f-4d25-a444-7b4752e43f89): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:10:39 crc kubenswrapper[4705]: E0216 15:10:39.211608 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" podUID="8d4c4ad7-542f-4d25-a444-7b4752e43f89" Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.213393 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" event={"ID":"8279d837-6ad4-4e2b-a03a-eb0a24a30998","Type":"ContainerStarted","Data":"0ce1b6f4b06ddcef363b2f69e26bee286cff0854df33526f1a42c63c0d8a806c"} Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.215726 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" 
event={"ID":"d4a1c432-7691-472b-80af-caaa6afcacb2","Type":"ContainerStarted","Data":"858d5984e85f61ee4ef173dcf1aad4a8e9d6ebe913b9361fd59cbae5944ddfeb"} Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.225955 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" event={"ID":"7373be90-eefb-4c2b-bdbd-a312daef2434","Type":"ContainerStarted","Data":"5769ab6c98393e088e9b85a18cf50620cf4bfc26eca3b70476ee6a82c08c4ad2"} Feb 16 15:10:39 crc kubenswrapper[4705]: I0216 15:10:39.661090 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:39 crc kubenswrapper[4705]: E0216 15:10:39.661596 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:39 crc kubenswrapper[4705]: E0216 15:10:39.661661 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert podName:9bd1689a-ae93-4ac0-ab21-c899756ef07a nodeName:}" failed. No retries permitted until 2026-02-16 15:10:43.661644241 +0000 UTC m=+1037.846621317 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert") pod "infra-operator-controller-manager-79d975b745-xg4dw" (UID: "9bd1689a-ae93-4ac0-ab21-c899756ef07a") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.383462 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" event={"ID":"c66cb2ee-a6d3-454b-a2ea-a160038b76f6","Type":"ContainerStarted","Data":"e7d724620ab28912120b1d0e926f4bc8de254b44a90930caeea1b9953e3e8b6c"} Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.385826 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.386132 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.386212 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert podName:1872b592-a1cc-445a-b75f-f658612dc160 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:44.38619104 +0000 UTC m=+1038.571168116 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" (UID: "1872b592-a1cc-445a-b75f-f658612dc160") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.397568 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" event={"ID":"ca67e7ec-20a9-4768-ae37-3aa90f721201","Type":"ContainerStarted","Data":"83f13fa1f1d7b8fd9cdb6a74b177b498a4f2d071f3d06f1410b0b9e8b508fd5b"} Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.454853 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" event={"ID":"d583ac10-9ad2-4f95-9787-74f2cb28c943","Type":"ContainerStarted","Data":"2d0bf6215441a1b8402ca1dd3be8ae24eeeb60ec87a954fcd4f4d59c921b608a"} Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.473152 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" event={"ID":"8d4c4ad7-542f-4d25-a444-7b4752e43f89","Type":"ContainerStarted","Data":"0ecea067f607b3899bc3a5a0881b814f78da418dbdb8e56a01e2232763373878"} Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.479001 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.102:5001/openstack-k8s-operators/telemetry-operator:7c764327dd2ffab22c122e2f1706e47c6eeb2902\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" podUID="8d4c4ad7-542f-4d25-a444-7b4752e43f89" Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.494328 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" event={"ID":"d67e5221-5cd4-4659-a41b-5d470f435c3e","Type":"ContainerStarted","Data":"09e0d1c5ec7f1a07494f5c8c6a3b29b423b52d17aef5bdf97721f8bf6c65887c"} Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.499799 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" podUID="d67e5221-5cd4-4659-a41b-5d470f435c3e" Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.537172 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" event={"ID":"794d8603-8fa6-4068-8a38-e0825d42ae3f","Type":"ContainerStarted","Data":"3f6311770b658b79200ae03dd84f08003c81190aa7de83d04d5ef3927e2992f8"} Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.897466 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:40 crc kubenswrapper[4705]: I0216 15:10:40.897756 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:40 crc kubenswrapper[4705]: 
E0216 15:10:40.897764 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.897867 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:44.897841657 +0000 UTC m=+1039.082818733 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "metrics-server-cert" not found Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.897930 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:10:40 crc kubenswrapper[4705]: E0216 15:10:40.898004 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:44.897984821 +0000 UTC m=+1039.082961897 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "webhook-server-cert" not found Feb 16 15:10:41 crc kubenswrapper[4705]: E0216 15:10:41.560220 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.102:5001/openstack-k8s-operators/telemetry-operator:7c764327dd2ffab22c122e2f1706e47c6eeb2902\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" podUID="8d4c4ad7-542f-4d25-a444-7b4752e43f89" Feb 16 15:10:41 crc kubenswrapper[4705]: E0216 15:10:41.561555 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" podUID="d67e5221-5cd4-4659-a41b-5d470f435c3e" Feb 16 15:10:43 crc kubenswrapper[4705]: I0216 15:10:43.676225 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:43 crc kubenswrapper[4705]: E0216 15:10:43.676444 4705 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:43 crc kubenswrapper[4705]: E0216 15:10:43.676565 4705 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert podName:9bd1689a-ae93-4ac0-ab21-c899756ef07a nodeName:}" failed. No retries permitted until 2026-02-16 15:10:51.676543631 +0000 UTC m=+1045.861520707 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert") pod "infra-operator-controller-manager-79d975b745-xg4dw" (UID: "9bd1689a-ae93-4ac0-ab21-c899756ef07a") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:44 crc kubenswrapper[4705]: I0216 15:10:44.394228 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:44 crc kubenswrapper[4705]: E0216 15:10:44.394919 4705 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:44 crc kubenswrapper[4705]: E0216 15:10:44.395033 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert podName:1872b592-a1cc-445a-b75f-f658612dc160 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:52.395007815 +0000 UTC m=+1046.579984891 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" (UID: "1872b592-a1cc-445a-b75f-f658612dc160") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:44 crc kubenswrapper[4705]: I0216 15:10:44.908709 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:44 crc kubenswrapper[4705]: I0216 15:10:44.908821 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:44 crc kubenswrapper[4705]: E0216 15:10:44.909016 4705 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:10:44 crc kubenswrapper[4705]: E0216 15:10:44.909077 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:52.909058273 +0000 UTC m=+1047.094035349 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "metrics-server-cert" not found Feb 16 15:10:44 crc kubenswrapper[4705]: E0216 15:10:44.909461 4705 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:10:44 crc kubenswrapper[4705]: E0216 15:10:44.909500 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs podName:07891331-9fdb-4922-aea1-6a3acf7f656f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:52.909490265 +0000 UTC m=+1047.094467341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-zrvmj" (UID: "07891331-9fdb-4922-aea1-6a3acf7f656f") : secret "webhook-server-cert" not found Feb 16 15:10:51 crc kubenswrapper[4705]: I0216 15:10:51.756183 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:51 crc kubenswrapper[4705]: I0216 15:10:51.766191 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bd1689a-ae93-4ac0-ab21-c899756ef07a-cert\") pod \"infra-operator-controller-manager-79d975b745-xg4dw\" (UID: \"9bd1689a-ae93-4ac0-ab21-c899756ef07a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:51 crc 
kubenswrapper[4705]: E0216 15:10:51.958444 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 16 15:10:51 crc kubenswrapper[4705]: E0216 15:10:51.958718 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mqzh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-ftdcn_openstack-operators(a6d65371-bf15-42b9-857d-c4c7350aa402): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:51 crc kubenswrapper[4705]: E0216 15:10:51.960511 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" podUID="a6d65371-bf15-42b9-857d-c4c7350aa402" Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.026696 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.472263 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.482213 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1872b592-a1cc-445a-b75f-f658612dc160-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq\" (UID: \"1872b592-a1cc-445a-b75f-f658612dc160\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.674820 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:10:52 crc kubenswrapper[4705]: E0216 15:10:52.681100 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" podUID="a6d65371-bf15-42b9-857d-c4c7350aa402" Feb 16 15:10:52 crc kubenswrapper[4705]: E0216 15:10:52.853619 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 16 15:10:52 crc kubenswrapper[4705]: E0216 15:10:52.853900 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hqqdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-f52r7_openstack-operators(1b9942d1-9e1e-436b-8a58-e37d6b55a00b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:52 crc kubenswrapper[4705]: E0216 15:10:52.855017 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" podUID="1b9942d1-9e1e-436b-8a58-e37d6b55a00b" Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.983882 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.984032 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:52 crc kubenswrapper[4705]: I0216 15:10:52.990462 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:53 crc kubenswrapper[4705]: I0216 15:10:53.000154 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07891331-9fdb-4922-aea1-6a3acf7f656f-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-zrvmj\" (UID: \"07891331-9fdb-4922-aea1-6a3acf7f656f\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:53 crc kubenswrapper[4705]: I0216 15:10:53.214387 4705 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:10:53 crc kubenswrapper[4705]: E0216 15:10:53.694839 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" podUID="1b9942d1-9e1e-436b-8a58-e37d6b55a00b" Feb 16 15:10:53 crc kubenswrapper[4705]: E0216 15:10:53.731286 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 16 15:10:53 crc kubenswrapper[4705]: E0216 15:10:53.731587 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjkq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-77d2l_openstack-operators(d583ac10-9ad2-4f95-9787-74f2cb28c943): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:53 crc kubenswrapper[4705]: E0216 15:10:53.732856 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" podUID="d583ac10-9ad2-4f95-9787-74f2cb28c943" Feb 16 15:10:54 crc kubenswrapper[4705]: E0216 15:10:54.360961 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 16 15:10:54 crc kubenswrapper[4705]: E0216 15:10:54.361268 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8q5l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-kh759_openstack-operators(e73efbc6-26db-4760-b745-3c93c9b2329e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:54 crc kubenswrapper[4705]: E0216 15:10:54.363362 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" podUID="e73efbc6-26db-4760-b745-3c93c9b2329e" Feb 16 15:10:54 crc kubenswrapper[4705]: E0216 15:10:54.702886 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" podUID="e73efbc6-26db-4760-b745-3c93c9b2329e" Feb 16 15:10:54 crc kubenswrapper[4705]: E0216 15:10:54.702897 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" podUID="d583ac10-9ad2-4f95-9787-74f2cb28c943" Feb 16 15:10:56 crc kubenswrapper[4705]: E0216 15:10:56.946653 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 16 15:10:56 crc kubenswrapper[4705]: E0216 15:10:56.947434 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvxl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-dnbpd_openstack-operators(f06e9156-0c7b-41f6-a1cf-83820a7e7732): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:56 crc kubenswrapper[4705]: E0216 15:10:56.948941 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" podUID="f06e9156-0c7b-41f6-a1cf-83820a7e7732" Feb 16 15:10:57 crc kubenswrapper[4705]: E0216 15:10:57.727103 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" podUID="f06e9156-0c7b-41f6-a1cf-83820a7e7732" Feb 16 15:10:58 crc kubenswrapper[4705]: E0216 15:10:58.851782 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 16 15:10:58 crc kubenswrapper[4705]: E0216 15:10:58.852549 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqbjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-q5n45_openstack-operators(f1a4206b-818d-49e7-9177-9dc7373ded1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:58 crc kubenswrapper[4705]: E0216 15:10:58.853764 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" podUID="f1a4206b-818d-49e7-9177-9dc7373ded1c" Feb 16 15:10:59 crc kubenswrapper[4705]: E0216 15:10:59.747277 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" podUID="f1a4206b-818d-49e7-9177-9dc7373ded1c" Feb 16 15:11:00 crc kubenswrapper[4705]: E0216 15:11:00.803673 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 16 15:11:00 crc kubenswrapper[4705]: E0216 15:11:00.804676 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9p5hh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-bk9rm_openstack-operators(c66cb2ee-a6d3-454b-a2ea-a160038b76f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:00 crc kubenswrapper[4705]: E0216 15:11:00.806026 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" podUID="c66cb2ee-a6d3-454b-a2ea-a160038b76f6" Feb 16 15:11:01 crc kubenswrapper[4705]: E0216 15:11:01.461265 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 16 15:11:01 crc kubenswrapper[4705]: E0216 15:11:01.461628 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nh952,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-hw64s_openstack-operators(d4a1c432-7691-472b-80af-caaa6afcacb2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:01 crc kubenswrapper[4705]: E0216 15:11:01.462871 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" podUID="d4a1c432-7691-472b-80af-caaa6afcacb2" Feb 16 15:11:01 crc kubenswrapper[4705]: E0216 15:11:01.766125 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" podUID="c66cb2ee-a6d3-454b-a2ea-a160038b76f6" Feb 16 15:11:01 crc kubenswrapper[4705]: E0216 15:11:01.767670 4705 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" podUID="d4a1c432-7691-472b-80af-caaa6afcacb2" Feb 16 15:11:02 crc kubenswrapper[4705]: E0216 15:11:02.165007 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 16 15:11:02 crc kubenswrapper[4705]: E0216 15:11:02.165251 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8pbk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-6c6fr_openstack-operators(ca67e7ec-20a9-4768-ae37-3aa90f721201): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:02 crc kubenswrapper[4705]: E0216 15:11:02.166512 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" podUID="ca67e7ec-20a9-4768-ae37-3aa90f721201" Feb 16 15:11:02 crc kubenswrapper[4705]: E0216 15:11:02.781455 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" podUID="ca67e7ec-20a9-4768-ae37-3aa90f721201" Feb 16 15:11:03 crc kubenswrapper[4705]: E0216 15:11:03.214436 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 16 15:11:03 crc kubenswrapper[4705]: E0216 15:11:03.215039 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bnmxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-zk57l_openstack-operators(7373be90-eefb-4c2b-bdbd-a312daef2434): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:03 crc kubenswrapper[4705]: E0216 15:11:03.216392 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" podUID="7373be90-eefb-4c2b-bdbd-a312daef2434" Feb 16 15:11:03 crc kubenswrapper[4705]: E0216 15:11:03.796441 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" podUID="7373be90-eefb-4c2b-bdbd-a312daef2434" Feb 16 15:11:05 crc kubenswrapper[4705]: E0216 15:11:05.553420 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 16 15:11:05 crc kubenswrapper[4705]: E0216 15:11:05.553948 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8jw5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-xdlbv_openstack-operators(59e2a9a8-5a0d-4772-8d9c-b755fcd234be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:05 crc kubenswrapper[4705]: E0216 15:11:05.556544 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" podUID="59e2a9a8-5a0d-4772-8d9c-b755fcd234be" Feb 16 15:11:05 crc kubenswrapper[4705]: E0216 15:11:05.815713 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" podUID="59e2a9a8-5a0d-4772-8d9c-b755fcd234be" Feb 16 15:11:06 crc kubenswrapper[4705]: E0216 15:11:06.189022 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 15:11:06 crc kubenswrapper[4705]: E0216 15:11:06.189253 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-84pqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-8lztr_openstack-operators(34eadd57-e91b-4324-93c0-ede339012ab3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:06 crc kubenswrapper[4705]: E0216 15:11:06.191339 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" podUID="34eadd57-e91b-4324-93c0-ede339012ab3" Feb 16 15:11:06 crc kubenswrapper[4705]: E0216 15:11:06.824420 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" podUID="34eadd57-e91b-4324-93c0-ede339012ab3" Feb 16 15:11:08 crc kubenswrapper[4705]: E0216 15:11:08.753172 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 16 15:11:08 crc kubenswrapper[4705]: E0216 15:11:08.753745 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rnt9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-b6587_openstack-operators(8279d837-6ad4-4e2b-a03a-eb0a24a30998): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:08 crc kubenswrapper[4705]: E0216 15:11:08.755213 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" podUID="8279d837-6ad4-4e2b-a03a-eb0a24a30998" Feb 16 15:11:08 crc kubenswrapper[4705]: E0216 15:11:08.880550 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" podUID="8279d837-6ad4-4e2b-a03a-eb0a24a30998" Feb 16 15:11:09 crc kubenswrapper[4705]: I0216 15:11:09.212734 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw"] Feb 16 15:11:09 crc kubenswrapper[4705]: I0216 15:11:09.884094 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" event={"ID":"9bd1689a-ae93-4ac0-ab21-c899756ef07a","Type":"ContainerStarted","Data":"02a33bc9560ba627451b465a76120b11857961a8c985b83240446e9db08c2627"} Feb 16 15:11:09 crc kubenswrapper[4705]: I0216 15:11:09.922324 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq"] Feb 16 15:11:09 crc kubenswrapper[4705]: I0216 15:11:09.986131 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj"] Feb 16 15:11:10 crc kubenswrapper[4705]: W0216 15:11:10.183038 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1872b592_a1cc_445a_b75f_f658612dc160.slice/crio-edeb3f7b34cbe5466d8259156677ba53dfa1f994606cf96d465ea52dad191658 WatchSource:0}: Error finding container edeb3f7b34cbe5466d8259156677ba53dfa1f994606cf96d465ea52dad191658: Status 404 returned error can't find the container with id edeb3f7b34cbe5466d8259156677ba53dfa1f994606cf96d465ea52dad191658 Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.912722 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" 
event={"ID":"07891331-9fdb-4922-aea1-6a3acf7f656f","Type":"ContainerStarted","Data":"c0388a91e8104ecd452db96ed97457e8f6ad6c3149150248281ab915a1bf221e"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.913149 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" event={"ID":"07891331-9fdb-4922-aea1-6a3acf7f656f","Type":"ContainerStarted","Data":"00a332e1694035de770f854f75759759e0a7a681a9785f2d2412ef442f9a34d9"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.914479 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.923440 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" event={"ID":"f0b4e27c-91ff-4540-bfff-e6c30849c75f","Type":"ContainerStarted","Data":"3b6ec758ca3e96a2800ff59221eb969d8073fea14bc66f751cb0b8ee1d67966d"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.924356 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.936674 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" event={"ID":"1872b592-a1cc-445a-b75f-f658612dc160","Type":"ContainerStarted","Data":"edeb3f7b34cbe5466d8259156677ba53dfa1f994606cf96d465ea52dad191658"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.953651 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" event={"ID":"5ee1a78f-cea6-443b-9b43-9ed2334c5c9e","Type":"ContainerStarted","Data":"886d09b73747919bed7e7c1cc82c961d6bff011bd64be69bc95e204af2e2fa7c"} Feb 16 
15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.954864 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.956345 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" event={"ID":"d583ac10-9ad2-4f95-9787-74f2cb28c943","Type":"ContainerStarted","Data":"30e901058a65ca78e4b2071132f2ea5301f7898067e2263382868ce7f7573bec"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.957498 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.967728 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" event={"ID":"1b9942d1-9e1e-436b-8a58-e37d6b55a00b","Type":"ContainerStarted","Data":"70a51012dbb0f26f2386d4d9f843820be6b6a8980664fa15a57df1704dbc6cfb"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.968489 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.982431 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" event={"ID":"e73efbc6-26db-4760-b745-3c93c9b2329e","Type":"ContainerStarted","Data":"e6d775580a1ff4966c5f8b78051c26adcb74e5b0844d99c4634f3d29852170ea"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.983508 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.984738 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" event={"ID":"84edc365-fa2c-40bc-ae0e-b71ae094ab27","Type":"ContainerStarted","Data":"c273eba925bfb5987af04b3e7438808c96b1ca182bf3e54ec9fd7621601fe915"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.985175 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.986929 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" event={"ID":"794d8603-8fa6-4068-8a38-e0825d42ae3f","Type":"ContainerStarted","Data":"04f520d38cd740f487a4ee0f874f679eb7e666034e72c8d4fec754fd2a85b0ca"} Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.987149 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:11:10 crc kubenswrapper[4705]: I0216 15:11:10.990896 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" podStartSLOduration=34.990879415 podStartE2EDuration="34.990879415s" podCreationTimestamp="2026-02-16 15:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:11:10.974459182 +0000 UTC m=+1065.159436258" watchObservedRunningTime="2026-02-16 15:11:10.990879415 +0000 UTC m=+1065.175856491" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.010078 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" event={"ID":"9f0ad3cb-ac80-4462-bd97-b09f9367dc54","Type":"ContainerStarted","Data":"99eb9a85eef51d842fc7c7af7df01eea7d9cfa79a658b4a6af9be0dd230d248d"} Feb 16 15:11:11 crc 
kubenswrapper[4705]: I0216 15:11:11.011028 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.014469 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" podStartSLOduration=5.768179625 podStartE2EDuration="36.014455979s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:37.905577232 +0000 UTC m=+1032.090554308" lastFinishedPulling="2026-02-16 15:11:08.151853586 +0000 UTC m=+1062.336830662" observedRunningTime="2026-02-16 15:11:11.009763087 +0000 UTC m=+1065.194740163" watchObservedRunningTime="2026-02-16 15:11:11.014455979 +0000 UTC m=+1065.199433055" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.031647 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" event={"ID":"8d4c4ad7-542f-4d25-a444-7b4752e43f89","Type":"ContainerStarted","Data":"a37c138621a40bad4a022cf4aec5313c8a095e1a3ccd124227862dcd9fb4212b"} Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.032822 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.044212 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" podStartSLOduration=3.668708489 podStartE2EDuration="36.044187626s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:37.918743177 +0000 UTC m=+1032.103720253" lastFinishedPulling="2026-02-16 15:11:10.294222314 +0000 UTC m=+1064.479199390" observedRunningTime="2026-02-16 15:11:11.035696867 +0000 UTC m=+1065.220673943" 
watchObservedRunningTime="2026-02-16 15:11:11.044187626 +0000 UTC m=+1065.229164702" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.046646 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" event={"ID":"d67e5221-5cd4-4659-a41b-5d470f435c3e","Type":"ContainerStarted","Data":"802b5d982eb2a2824d8a315a61f754dc128cf0d902f081586255ed15685f8e02"} Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.055712 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" event={"ID":"a6d65371-bf15-42b9-857d-c4c7350aa402","Type":"ContainerStarted","Data":"a28f73b409f77661d439c8f4462c43c745659cc60cfb24b10b12f6d93b752170"} Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.056700 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.066660 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" podStartSLOduration=5.905581495 podStartE2EDuration="36.066636859s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:37.990764511 +0000 UTC m=+1032.175741587" lastFinishedPulling="2026-02-16 15:11:08.151819855 +0000 UTC m=+1062.336796951" observedRunningTime="2026-02-16 15:11:11.062709398 +0000 UTC m=+1065.247686464" watchObservedRunningTime="2026-02-16 15:11:11.066636859 +0000 UTC m=+1065.251613935" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.124926 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" podStartSLOduration=3.995226665 podStartE2EDuration="35.124896119s" podCreationTimestamp="2026-02-16 15:10:36 +0000 UTC" 
firstStartedPulling="2026-02-16 15:10:39.111099464 +0000 UTC m=+1033.296076540" lastFinishedPulling="2026-02-16 15:11:10.240768918 +0000 UTC m=+1064.425745994" observedRunningTime="2026-02-16 15:11:11.109758473 +0000 UTC m=+1065.294735549" watchObservedRunningTime="2026-02-16 15:11:11.124896119 +0000 UTC m=+1065.309873195" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.221697 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" podStartSLOduration=4.260185208 podStartE2EDuration="36.221665045s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.233341987 +0000 UTC m=+1032.418319063" lastFinishedPulling="2026-02-16 15:11:10.194821814 +0000 UTC m=+1064.379798900" observedRunningTime="2026-02-16 15:11:11.200653383 +0000 UTC m=+1065.385630459" watchObservedRunningTime="2026-02-16 15:11:11.221665045 +0000 UTC m=+1065.406642121" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.273887 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" podStartSLOduration=7.230830559 podStartE2EDuration="36.273855375s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:39.110784455 +0000 UTC m=+1033.295761531" lastFinishedPulling="2026-02-16 15:11:08.153809261 +0000 UTC m=+1062.338786347" observedRunningTime="2026-02-16 15:11:11.246513705 +0000 UTC m=+1065.431490781" watchObservedRunningTime="2026-02-16 15:11:11.273855375 +0000 UTC m=+1065.458832451" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.339394 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" podStartSLOduration=5.144732827 podStartE2EDuration="36.33935049s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" 
firstStartedPulling="2026-02-16 15:10:36.958488078 +0000 UTC m=+1031.143465164" lastFinishedPulling="2026-02-16 15:11:08.153105751 +0000 UTC m=+1062.338082827" observedRunningTime="2026-02-16 15:11:11.325655754 +0000 UTC m=+1065.510632830" watchObservedRunningTime="2026-02-16 15:11:11.33935049 +0000 UTC m=+1065.524327566" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.456533 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" podStartSLOduration=4.456224659 podStartE2EDuration="36.456489679s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.20435002 +0000 UTC m=+1032.389327096" lastFinishedPulling="2026-02-16 15:11:10.20461504 +0000 UTC m=+1064.389592116" observedRunningTime="2026-02-16 15:11:11.440782426 +0000 UTC m=+1065.625759502" watchObservedRunningTime="2026-02-16 15:11:11.456489679 +0000 UTC m=+1065.641466745" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.496982 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" podStartSLOduration=5.427758501 podStartE2EDuration="36.491402582s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:39.210300792 +0000 UTC m=+1033.395277868" lastFinishedPulling="2026-02-16 15:11:10.273944873 +0000 UTC m=+1064.458921949" observedRunningTime="2026-02-16 15:11:11.48776839 +0000 UTC m=+1065.672745466" watchObservedRunningTime="2026-02-16 15:11:11.491402582 +0000 UTC m=+1065.676379658" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.543252 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5s9ck" podStartSLOduration=4.420684559 podStartE2EDuration="35.543233172s" podCreationTimestamp="2026-02-16 15:10:36 +0000 UTC" 
firstStartedPulling="2026-02-16 15:10:39.171500406 +0000 UTC m=+1033.356477482" lastFinishedPulling="2026-02-16 15:11:10.294049009 +0000 UTC m=+1064.479026095" observedRunningTime="2026-02-16 15:11:11.539936069 +0000 UTC m=+1065.724913155" watchObservedRunningTime="2026-02-16 15:11:11.543233172 +0000 UTC m=+1065.728210248" Feb 16 15:11:11 crc kubenswrapper[4705]: I0216 15:11:11.592263 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" podStartSLOduration=7.092199475 podStartE2EDuration="36.592246072s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.653332562 +0000 UTC m=+1032.838309638" lastFinishedPulling="2026-02-16 15:11:08.153379159 +0000 UTC m=+1062.338356235" observedRunningTime="2026-02-16 15:11:11.590701479 +0000 UTC m=+1065.775678565" watchObservedRunningTime="2026-02-16 15:11:11.592246072 +0000 UTC m=+1065.777223148" Feb 16 15:11:14 crc kubenswrapper[4705]: I0216 15:11:14.086104 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" event={"ID":"f1a4206b-818d-49e7-9177-9dc7373ded1c","Type":"ContainerStarted","Data":"5d57fcc57a5792fb93ce1f1f6a3dd54a202d2e83574ff7d0f17bcb3eec786412"} Feb 16 15:11:14 crc kubenswrapper[4705]: I0216 15:11:14.087087 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:11:14 crc kubenswrapper[4705]: I0216 15:11:14.094987 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" event={"ID":"f06e9156-0c7b-41f6-a1cf-83820a7e7732","Type":"ContainerStarted","Data":"88c0ce3a4dee1d6fdc271f499cdeb940241dabfca5aa9a0d8fcd431f503ecd19"} Feb 16 15:11:14 crc kubenswrapper[4705]: I0216 15:11:14.095756 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:11:14 crc kubenswrapper[4705]: I0216 15:11:14.112438 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" podStartSLOduration=3.143099187 podStartE2EDuration="39.112410861s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:37.147014383 +0000 UTC m=+1031.331991459" lastFinishedPulling="2026-02-16 15:11:13.116326057 +0000 UTC m=+1067.301303133" observedRunningTime="2026-02-16 15:11:14.106873785 +0000 UTC m=+1068.291850871" watchObservedRunningTime="2026-02-16 15:11:14.112410861 +0000 UTC m=+1068.297387937" Feb 16 15:11:14 crc kubenswrapper[4705]: I0216 15:11:14.136757 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" podStartSLOduration=4.416593823 podStartE2EDuration="39.136727376s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.290662611 +0000 UTC m=+1032.475639677" lastFinishedPulling="2026-02-16 15:11:13.010796154 +0000 UTC m=+1067.195773230" observedRunningTime="2026-02-16 15:11:14.128937987 +0000 UTC m=+1068.313915103" watchObservedRunningTime="2026-02-16 15:11:14.136727376 +0000 UTC m=+1068.321704492" Feb 16 15:11:15 crc kubenswrapper[4705]: I0216 15:11:15.819910 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f52r7" Feb 16 15:11:15 crc kubenswrapper[4705]: I0216 15:11:15.820270 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-fsx2w" Feb 16 15:11:15 crc kubenswrapper[4705]: I0216 15:11:15.825085 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-s9vdm" Feb 16 15:11:15 crc kubenswrapper[4705]: I0216 15:11:15.960042 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-f4fgx" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.116241 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" event={"ID":"c66cb2ee-a6d3-454b-a2ea-a160038b76f6","Type":"ContainerStarted","Data":"c64349a54a7e60292c5ea466997d0709f7abb50d08b9910714aefd138c7e4c4a"} Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.117770 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.119752 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" event={"ID":"9bd1689a-ae93-4ac0-ab21-c899756ef07a","Type":"ContainerStarted","Data":"baecbcc39cb32374016576b48d7c2e30efbf65d1ca3d0699c79b79ee7b705069"} Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.119865 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.122104 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" event={"ID":"1872b592-a1cc-445a-b75f-f658612dc160","Type":"ContainerStarted","Data":"5a1a32e1f569f196520b32cb3315cc745de1f9db08d98119108bc01428cc9407"} Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.122299 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:11:16 
crc kubenswrapper[4705]: I0216 15:11:16.144977 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" podStartSLOduration=4.603596258 podStartE2EDuration="41.144945707s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:39.088322485 +0000 UTC m=+1033.273299561" lastFinishedPulling="2026-02-16 15:11:15.629671934 +0000 UTC m=+1069.814649010" observedRunningTime="2026-02-16 15:11:16.143249069 +0000 UTC m=+1070.328226145" watchObservedRunningTime="2026-02-16 15:11:16.144945707 +0000 UTC m=+1070.329922783" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.146357 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ftdcn" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.209682 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" podStartSLOduration=35.743891478 podStartE2EDuration="41.209650459s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:11:10.194153775 +0000 UTC m=+1064.379130851" lastFinishedPulling="2026-02-16 15:11:15.659912746 +0000 UTC m=+1069.844889832" observedRunningTime="2026-02-16 15:11:16.181917628 +0000 UTC m=+1070.366894704" watchObservedRunningTime="2026-02-16 15:11:16.209650459 +0000 UTC m=+1070.394627535" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.212700 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" podStartSLOduration=35.029828827 podStartE2EDuration="41.212673934s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:11:09.477071079 +0000 UTC m=+1063.662048165" lastFinishedPulling="2026-02-16 15:11:15.659916196 
+0000 UTC m=+1069.844893272" observedRunningTime="2026-02-16 15:11:16.20256636 +0000 UTC m=+1070.387543436" watchObservedRunningTime="2026-02-16 15:11:16.212673934 +0000 UTC m=+1070.397651010" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.413084 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kh759" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.451990 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2vvm8" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.931803 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-qbt7j" Feb 16 15:11:16 crc kubenswrapper[4705]: I0216 15:11:16.947357 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vkmgq" Feb 16 15:11:17 crc kubenswrapper[4705]: I0216 15:11:17.145834 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" event={"ID":"d4a1c432-7691-472b-80af-caaa6afcacb2","Type":"ContainerStarted","Data":"bfc3f9ca887b472519251656402b6ecd440d6adbbcc6a32960895a97fb04f49b"} Feb 16 15:11:17 crc kubenswrapper[4705]: I0216 15:11:17.147040 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:11:17 crc kubenswrapper[4705]: I0216 15:11:17.169015 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" podStartSLOduration=4.748918575 podStartE2EDuration="42.168993818s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.559037463 +0000 
UTC m=+1032.744014529" lastFinishedPulling="2026-02-16 15:11:15.979112696 +0000 UTC m=+1070.164089772" observedRunningTime="2026-02-16 15:11:17.16692012 +0000 UTC m=+1071.351897206" watchObservedRunningTime="2026-02-16 15:11:17.168993818 +0000 UTC m=+1071.353970894" Feb 16 15:11:17 crc kubenswrapper[4705]: I0216 15:11:17.266182 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-77d2l" Feb 16 15:11:18 crc kubenswrapper[4705]: I0216 15:11:18.158272 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" event={"ID":"7373be90-eefb-4c2b-bdbd-a312daef2434","Type":"ContainerStarted","Data":"27e6eedaccb9ab708cd6338f682159b8d96abdbcbfe78114130d44004c17b8cd"} Feb 16 15:11:18 crc kubenswrapper[4705]: I0216 15:11:18.158892 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:11:18 crc kubenswrapper[4705]: I0216 15:11:18.161208 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" event={"ID":"ca67e7ec-20a9-4768-ae37-3aa90f721201","Type":"ContainerStarted","Data":"12c8f52b838f5d0ee99eca55dbae3b7837c74ef9fe6bfb7f995ca068ba68cdbb"} Feb 16 15:11:18 crc kubenswrapper[4705]: I0216 15:11:18.182521 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" podStartSLOduration=4.71935496 podStartE2EDuration="43.182498113s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.665940461 +0000 UTC m=+1032.850917537" lastFinishedPulling="2026-02-16 15:11:17.129083614 +0000 UTC m=+1071.314060690" observedRunningTime="2026-02-16 15:11:18.180679122 +0000 UTC m=+1072.365656208" watchObservedRunningTime="2026-02-16 
15:11:18.182498113 +0000 UTC m=+1072.367475189" Feb 16 15:11:18 crc kubenswrapper[4705]: I0216 15:11:18.206582 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" podStartSLOduration=5.157662393 podStartE2EDuration="43.206555871s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:39.087934564 +0000 UTC m=+1033.272911640" lastFinishedPulling="2026-02-16 15:11:17.136828032 +0000 UTC m=+1071.321805118" observedRunningTime="2026-02-16 15:11:18.202163517 +0000 UTC m=+1072.387140603" watchObservedRunningTime="2026-02-16 15:11:18.206555871 +0000 UTC m=+1072.391532957" Feb 16 15:11:19 crc kubenswrapper[4705]: I0216 15:11:19.171273 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" event={"ID":"59e2a9a8-5a0d-4772-8d9c-b755fcd234be","Type":"ContainerStarted","Data":"d3adc4667521059be5b629406c458a39d7d58140309107d790c3bc419ea0fd6c"} Feb 16 15:11:19 crc kubenswrapper[4705]: I0216 15:11:19.172054 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:11:19 crc kubenswrapper[4705]: I0216 15:11:19.192299 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" podStartSLOduration=3.320353179 podStartE2EDuration="44.192277423s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.03703756 +0000 UTC m=+1032.222014636" lastFinishedPulling="2026-02-16 15:11:18.908961764 +0000 UTC m=+1073.093938880" observedRunningTime="2026-02-16 15:11:19.185133052 +0000 UTC m=+1073.370110138" watchObservedRunningTime="2026-02-16 15:11:19.192277423 +0000 UTC m=+1073.377254499" Feb 16 15:11:21 crc kubenswrapper[4705]: I0216 15:11:21.199304 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" event={"ID":"34eadd57-e91b-4324-93c0-ede339012ab3","Type":"ContainerStarted","Data":"d28cbfcacecf469f1cfa8d86454fb022e4204df868a129d8fe15a64f9744de37"} Feb 16 15:11:21 crc kubenswrapper[4705]: I0216 15:11:21.200531 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:11:21 crc kubenswrapper[4705]: I0216 15:11:21.220314 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" podStartSLOduration=4.14332459 podStartE2EDuration="46.22027088s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.013633323 +0000 UTC m=+1032.198610399" lastFinishedPulling="2026-02-16 15:11:20.090579613 +0000 UTC m=+1074.275556689" observedRunningTime="2026-02-16 15:11:21.216413761 +0000 UTC m=+1075.401390837" watchObservedRunningTime="2026-02-16 15:11:21.22027088 +0000 UTC m=+1075.405247956" Feb 16 15:11:22 crc kubenswrapper[4705]: I0216 15:11:22.037316 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-xg4dw" Feb 16 15:11:22 crc kubenswrapper[4705]: I0216 15:11:22.682435 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq" Feb 16 15:11:23 crc kubenswrapper[4705]: I0216 15:11:23.223533 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" event={"ID":"8279d837-6ad4-4e2b-a03a-eb0a24a30998","Type":"ContainerStarted","Data":"03b068295e0654ebb37c19c31b73d4a8886a8926c28d589fc7c38ed730fafa87"} Feb 16 15:11:23 crc kubenswrapper[4705]: I0216 15:11:23.224148 
4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:11:23 crc kubenswrapper[4705]: I0216 15:11:23.224331 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-zrvmj" Feb 16 15:11:23 crc kubenswrapper[4705]: I0216 15:11:23.247663 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" podStartSLOduration=3.988081015 podStartE2EDuration="48.24762078s" podCreationTimestamp="2026-02-16 15:10:35 +0000 UTC" firstStartedPulling="2026-02-16 15:10:38.581158164 +0000 UTC m=+1032.766135240" lastFinishedPulling="2026-02-16 15:11:22.840697889 +0000 UTC m=+1077.025675005" observedRunningTime="2026-02-16 15:11:23.241053355 +0000 UTC m=+1077.426030451" watchObservedRunningTime="2026-02-16 15:11:23.24762078 +0000 UTC m=+1077.432597886" Feb 16 15:11:25 crc kubenswrapper[4705]: I0216 15:11:25.932030 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-xdlbv" Feb 16 15:11:25 crc kubenswrapper[4705]: I0216 15:11:25.973508 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-q5n45" Feb 16 15:11:26 crc kubenswrapper[4705]: I0216 15:11:26.300456 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8lztr" Feb 16 15:11:26 crc kubenswrapper[4705]: I0216 15:11:26.394326 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-dnbpd" Feb 16 15:11:26 crc kubenswrapper[4705]: I0216 15:11:26.616323 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zk57l" Feb 16 15:11:26 crc kubenswrapper[4705]: I0216 15:11:26.678893 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-hw64s" Feb 16 15:11:26 crc kubenswrapper[4705]: I0216 15:11:26.992903 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:11:26 crc kubenswrapper[4705]: I0216 15:11:26.996835 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6c6fr" Feb 16 15:11:27 crc kubenswrapper[4705]: I0216 15:11:27.011202 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-bk9rm" Feb 16 15:11:31 crc kubenswrapper[4705]: I0216 15:11:31.684326 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:11:31 crc kubenswrapper[4705]: I0216 15:11:31.685637 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:11:36 crc kubenswrapper[4705]: I0216 15:11:36.601514 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-b6587" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.918173 4705 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b59zw"] Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.925197 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.931091 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.932039 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.932336 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qrmjt" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.932862 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b59zw"] Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.933085 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.945985 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22j4x"] Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.946788 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-config\") pod \"dnsmasq-dns-675f4bcbfc-b59zw\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.946894 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gvv4\" (UniqueName: \"kubernetes.io/projected/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-kube-api-access-4gvv4\") pod \"dnsmasq-dns-675f4bcbfc-b59zw\" (UID: 
\"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.948838 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.956941 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 15:11:58 crc kubenswrapper[4705]: I0216 15:11:58.971252 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22j4x"] Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.049159 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.049242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gvv4\" (UniqueName: \"kubernetes.io/projected/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-kube-api-access-4gvv4\") pod \"dnsmasq-dns-675f4bcbfc-b59zw\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.049266 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-config\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.049319 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76xwq\" (UniqueName: 
\"kubernetes.io/projected/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-kube-api-access-76xwq\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.049397 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-config\") pod \"dnsmasq-dns-675f4bcbfc-b59zw\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.050458 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-config\") pod \"dnsmasq-dns-675f4bcbfc-b59zw\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.071101 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gvv4\" (UniqueName: \"kubernetes.io/projected/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-kube-api-access-4gvv4\") pod \"dnsmasq-dns-675f4bcbfc-b59zw\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.152150 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76xwq\" (UniqueName: \"kubernetes.io/projected/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-kube-api-access-76xwq\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.152309 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-dns-svc\") pod 
\"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.152625 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-config\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.153530 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-config\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.153539 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.172942 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76xwq\" (UniqueName: \"kubernetes.io/projected/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-kube-api-access-76xwq\") pod \"dnsmasq-dns-78dd6ddcc-22j4x\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.270855 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.279804 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.765178 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22j4x"] Feb 16 15:11:59 crc kubenswrapper[4705]: I0216 15:11:59.851758 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b59zw"] Feb 16 15:11:59 crc kubenswrapper[4705]: W0216 15:11:59.855645 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ebe5f1b_1a13_4172_8662_aeae2c43ade1.slice/crio-94d5ff52b59cc104492407be610cfd24a666cbfee388ce9d60b4354fff5e559a WatchSource:0}: Error finding container 94d5ff52b59cc104492407be610cfd24a666cbfee388ce9d60b4354fff5e559a: Status 404 returned error can't find the container with id 94d5ff52b59cc104492407be610cfd24a666cbfee388ce9d60b4354fff5e559a Feb 16 15:12:00 crc kubenswrapper[4705]: I0216 15:12:00.607094 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" event={"ID":"6ebe5f1b-1a13-4172-8662-aeae2c43ade1","Type":"ContainerStarted","Data":"94d5ff52b59cc104492407be610cfd24a666cbfee388ce9d60b4354fff5e559a"} Feb 16 15:12:00 crc kubenswrapper[4705]: I0216 15:12:00.608732 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" event={"ID":"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27","Type":"ContainerStarted","Data":"0d5732ad1582d0dc0f1a09eb172ef4f895ed8673cbf8cf85d9d7eaad2e583287"} Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.685854 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.686242 4705 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.751778 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b59zw"] Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.783852 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zdn4j"] Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.785448 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.824681 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zdn4j"] Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.988043 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.988147 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bms9k\" (UniqueName: \"kubernetes.io/projected/d7dbc743-b65f-414c-adef-c3e8e158e4dc-kube-api-access-bms9k\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:01 crc kubenswrapper[4705]: I0216 15:12:01.988230 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-config\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.090145 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.090226 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bms9k\" (UniqueName: \"kubernetes.io/projected/d7dbc743-b65f-414c-adef-c3e8e158e4dc-kube-api-access-bms9k\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.090282 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-config\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.091896 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.091941 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-config\") pod 
\"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.148360 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22j4x"] Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.158946 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bms9k\" (UniqueName: \"kubernetes.io/projected/d7dbc743-b65f-414c-adef-c3e8e158e4dc-kube-api-access-bms9k\") pod \"dnsmasq-dns-666b6646f7-zdn4j\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.192000 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-crh45"] Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.210351 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.287532 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-crh45"] Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.349171 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.349233 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-config\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 
crc kubenswrapper[4705]: I0216 15:12:02.349302 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s7td\" (UniqueName: \"kubernetes.io/projected/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-kube-api-access-4s7td\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.409426 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.450736 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-config\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.450844 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s7td\" (UniqueName: \"kubernetes.io/projected/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-kube-api-access-4s7td\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.450939 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.451942 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-config\") pod 
\"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.452206 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.494255 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s7td\" (UniqueName: \"kubernetes.io/projected/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-kube-api-access-4s7td\") pod \"dnsmasq-dns-57d769cc4f-crh45\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.546468 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.978167 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:12:02 crc kubenswrapper[4705]: I0216 15:12:02.989298 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.017648 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.017697 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-st4tw" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.025165 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.025505 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.025929 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.025979 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.029652 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.053505 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.066556 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.071738 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.087342 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.091169 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.104713 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105008 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105099 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105124 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105171 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 
16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105199 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ba19f15-a399-4d4b-bf32-a2a870a660e5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105279 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-config-data\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105310 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105394 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd25j\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-kube-api-access-pd25j\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.105420 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ba19f15-a399-4d4b-bf32-a2a870a660e5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc 
kubenswrapper[4705]: I0216 15:12:03.105449 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.119780 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.133755 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.142734 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zdn4j"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209195 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209537 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-server-conf\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209582 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" 
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209616 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/139788ad-b160-4139-a6af-094e33c581e5-pod-info\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209639 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/139788ad-b160-4139-a6af-094e33c581e5-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209755 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209915 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.209987 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-config-data\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.210080 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211215 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-config-data\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211383 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211447 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd25j\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-kube-api-access-pd25j\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211545 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ba19f15-a399-4d4b-bf32-a2a870a660e5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211593 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211619 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211640 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-config-data\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211674 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211692 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrknb\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-kube-api-access-vrknb\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211721 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211752 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211883 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211941 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211977 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.211998 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212048 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212133 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6b410b5-951c-43d2-b846-3fef02ec0f7f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212229 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212264 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212302 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212319 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6b410b5-951c-43d2-b846-3fef02ec0f7f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212377 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212487 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ba19f15-a399-4d4b-bf32-a2a870a660e5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212515 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-config-data\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.212584 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfsp9\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-kube-api-access-tfsp9\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " 
pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.213856 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.213919 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.214269 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.215521 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.220758 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ba19f15-a399-4d4b-bf32-a2a870a660e5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.221180 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.221212 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6913a5af6e0b901f5e41cc9da5820d3446361504ddf8a58e3143477836427e51/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.222414 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.222714 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.243188 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ba19f15-a399-4d4b-bf32-a2a870a660e5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.244313 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd25j\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-kube-api-access-pd25j\") pod \"rabbitmq-server-0\" (UID: 
\"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.260309 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-crh45"] Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.296897 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " pod="openstack/rabbitmq-server-0" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.316798 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318160 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318204 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318240 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f6b410b5-951c-43d2-b846-3fef02ec0f7f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318294 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6b410b5-951c-43d2-b846-3fef02ec0f7f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318324 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318352 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-config-data\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318405 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfsp9\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-kube-api-access-tfsp9\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318447 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: 
\"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318473 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-server-conf\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318490 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318523 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/139788ad-b160-4139-a6af-094e33c581e5-pod-info\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318548 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/139788ad-b160-4139-a6af-094e33c581e5-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318568 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2" Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318594 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318664 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318686 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318703 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-config-data\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318743 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrknb\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-kube-api-access-vrknb\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318766 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.318790 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.319354 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.320672 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.321450 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-config-data\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.327013 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.329089 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.329537 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.329875 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.331007 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.331192 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-config-data\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.332225 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.332463 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/139788ad-b160-4139-a6af-094e33c581e5-pod-info\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.334000 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.334879 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-server-conf\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.337509 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.337622 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2e04bcb153e3e04f037e1fc841d6f137a96f2052e5c7d3319ec9bf09db685a60/globalmount\"" pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.339076 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.343687 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/139788ad-b160-4139-a6af-094e33c581e5-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.348397 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.354429 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.359299 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.359468 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.359514 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/75a91b98174d7040097f89a93bfd5946d971fbacf68f20932d87234b8e73eca0/globalmount\"" pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.360016 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.361910 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6b410b5-951c-43d2-b846-3fef02ec0f7f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.363945 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrknb\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-kube-api-access-vrknb\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.364406 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.364613 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jzl8w"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.364745 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.364890 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.365078 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.365217 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.366536 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfsp9\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-kube-api-access-tfsp9\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.390941 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6b410b5-951c-43d2-b846-3fef02ec0f7f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.395911 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.398505 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.423577 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.423675 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/070373d6-b0bd-43e2-bdf5-ca300875e65d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.423746 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwxp\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-kube-api-access-gfwxp\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.423841 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.423887 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/070373d6-b0bd-43e2-bdf5-ca300875e65d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.423972 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.424014 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.424097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.424140 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.424304 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.424500 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.494641 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.533340 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.533894 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.533934 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.534050 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536408 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536564 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536606 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/070373d6-b0bd-43e2-bdf5-ca300875e65d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536644 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwxp\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-kube-api-access-gfwxp\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536725 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536751 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/070373d6-b0bd-43e2-bdf5-ca300875e65d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.536799 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.537189 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.539913 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.541241 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.541534 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.549772 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.550682 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/070373d6-b0bd-43e2-bdf5-ca300875e65d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.554284 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/070373d6-b0bd-43e2-bdf5-ca300875e65d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.566972 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.567038 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/15fddb9283d0361ec376f6d3697b3a7dae141e971c813fd76f875f1c98aad2dc/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.571082 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.571905 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.575775 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwxp\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-kube-api-access-gfwxp\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.632398 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.698229 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.729236 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.729861 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.743474 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.751237 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" event={"ID":"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec","Type":"ContainerStarted","Data":"483f41b8e768070c0e3971042788df02650602d14770eb6fc300e60a9f3c1c36"}
Feb 16 15:12:03 crc kubenswrapper[4705]: I0216 15:12:03.762594 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" event={"ID":"d7dbc743-b65f-414c-adef-c3e8e158e4dc","Type":"ContainerStarted","Data":"cba1b72db61c105e5863e586d645a2f7e94a83ed46db96da197a374840b783e3"}
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.089009 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.536493 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.539049 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.551214 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-bxd9j"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.551282 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.551553 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.552660 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.561814 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.579596 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.603957 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693548 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-kolla-config\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693662 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/50502923-5ef9-46a9-a23d-abe8face6040-config-data-generated\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693779 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/50502923-5ef9-46a9-a23d-abe8face6040-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693814 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-operator-scripts\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693846 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z88j\" (UniqueName: \"kubernetes.io/projected/50502923-5ef9-46a9-a23d-abe8face6040-kube-api-access-7z88j\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693880 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-67aab641-5214-49de-9a0b-3806f71b983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67aab641-5214-49de-9a0b-3806f71b983d\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693906 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50502923-5ef9-46a9-a23d-abe8face6040-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.693930 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-config-data-default\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.781510 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.788633 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3ba19f15-a399-4d4b-bf32-a2a870a660e5","Type":"ContainerStarted","Data":"c10aeda896c97ab2b56b22cb8e034aaa58126bfac49a954b06a32ef9f4316ccc"}
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.794116 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"139788ad-b160-4139-a6af-094e33c581e5","Type":"ContainerStarted","Data":"ad93a17a230e0f89ffb728c848e626d65cc868f03d8c72f03802d0c82854159a"}
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.795814 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/50502923-5ef9-46a9-a23d-abe8face6040-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.795862 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-operator-scripts\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.795908 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z88j\" (UniqueName: \"kubernetes.io/projected/50502923-5ef9-46a9-a23d-abe8face6040-kube-api-access-7z88j\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.795939 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-67aab641-5214-49de-9a0b-3806f71b983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67aab641-5214-49de-9a0b-3806f71b983d\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.795960 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50502923-5ef9-46a9-a23d-abe8face6040-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.796091 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-config-data-default\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.796149 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-kolla-config\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.796199 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/50502923-5ef9-46a9-a23d-abe8face6040-config-data-generated\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.796685 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/50502923-5ef9-46a9-a23d-abe8face6040-config-data-generated\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.799428 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-operator-scripts\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.800136 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-config-data-default\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.801303 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/50502923-5ef9-46a9-a23d-abe8face6040-kolla-config\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0"
Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.812087 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50502923-5ef9-46a9-a23d-abe8face6040-combined-ca-bundle\") pod \"openstack-galera-0\"
(UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.819929 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/50502923-5ef9-46a9-a23d-abe8face6040-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.842980 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.855210 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.855262 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-67aab641-5214-49de-9a0b-3806f71b983d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67aab641-5214-49de-9a0b-3806f71b983d\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b84ac327ec17a2e5247227ffa0b0ce2e626f629e87314080a000575c7f56c493/globalmount\"" pod="openstack/openstack-galera-0" Feb 16 15:12:04 crc kubenswrapper[4705]: I0216 15:12:04.862363 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z88j\" (UniqueName: \"kubernetes.io/projected/50502923-5ef9-46a9-a23d-abe8face6040-kube-api-access-7z88j\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:05 crc kubenswrapper[4705]: I0216 15:12:05.032242 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-67aab641-5214-49de-9a0b-3806f71b983d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67aab641-5214-49de-9a0b-3806f71b983d\") pod \"openstack-galera-0\" (UID: \"50502923-5ef9-46a9-a23d-abe8face6040\") " pod="openstack/openstack-galera-0" Feb 16 15:12:05 crc kubenswrapper[4705]: I0216 15:12:05.196743 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 15:12:05 crc kubenswrapper[4705]: I0216 15:12:05.843304 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070373d6-b0bd-43e2-bdf5-ca300875e65d","Type":"ContainerStarted","Data":"9536c4826f2994651344a9956c3c00d2cb404777160d90908e2937cd52e8fb5f"} Feb 16 15:12:05 crc kubenswrapper[4705]: I0216 15:12:05.848657 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f6b410b5-951c-43d2-b846-3fef02ec0f7f","Type":"ContainerStarted","Data":"ba74fdfcb7efec48976e7232011d375059db8616337cd4b51be00bbb131415c9"} Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.110557 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.121784 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.126412 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.126776 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.127123 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pg6t9" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.127365 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.171449 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.210664 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.218666 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.221280 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-7z2kg" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.221569 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.226810 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.242808 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251019 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv29m\" (UniqueName: \"kubernetes.io/projected/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-kube-api-access-rv29m\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251316 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251347 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251388 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251405 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251467 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251509 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.251573 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a4db6acc-1871-432c-93a8-6774473ae15f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4db6acc-1871-432c-93a8-6774473ae15f\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" 
Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.272113 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356604 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356682 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356759 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356792 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356824 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a4db6acc-1871-432c-93a8-6774473ae15f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4db6acc-1871-432c-93a8-6774473ae15f\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356871 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv29m\" (UniqueName: \"kubernetes.io/projected/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-kube-api-access-rv29m\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356909 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/db14762a-eebd-41a0-b107-e879fedc05f1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356938 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nflbx\" (UniqueName: \"kubernetes.io/projected/db14762a-eebd-41a0-b107-e879fedc05f1-kube-api-access-nflbx\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.358046 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.356969 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/db14762a-eebd-41a0-b107-e879fedc05f1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.358833 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db14762a-eebd-41a0-b107-e879fedc05f1-kolla-config\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.358944 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db14762a-eebd-41a0-b107-e879fedc05f1-config-data\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.358979 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.359013 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.360804 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.361270 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.366253 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.371270 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.372214 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.385777 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.385869 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a4db6acc-1871-432c-93a8-6774473ae15f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4db6acc-1871-432c-93a8-6774473ae15f\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6751ac2a32a11bd99c4c7a4a92851db593f531ecbf0ccd549987b595b7d4796d/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.391509 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv29m\" (UniqueName: \"kubernetes.io/projected/616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab-kube-api-access-rv29m\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: W0216 15:12:06.403466 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50502923_5ef9_46a9_a23d_abe8face6040.slice/crio-a8e77084552df314e6bf7d1574fc9b66862eb9611d3bf0ea4678019797f18f4d WatchSource:0}: Error finding container a8e77084552df314e6bf7d1574fc9b66862eb9611d3bf0ea4678019797f18f4d: Status 404 returned error can't find the container with id a8e77084552df314e6bf7d1574fc9b66862eb9611d3bf0ea4678019797f18f4d Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.454259 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a4db6acc-1871-432c-93a8-6774473ae15f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4db6acc-1871-432c-93a8-6774473ae15f\") pod \"openstack-cell1-galera-0\" (UID: \"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 
15:12:06.482697 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/db14762a-eebd-41a0-b107-e879fedc05f1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.482778 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nflbx\" (UniqueName: \"kubernetes.io/projected/db14762a-eebd-41a0-b107-e879fedc05f1-kube-api-access-nflbx\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.482873 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db14762a-eebd-41a0-b107-e879fedc05f1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.482927 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db14762a-eebd-41a0-b107-e879fedc05f1-kolla-config\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.483035 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db14762a-eebd-41a0-b107-e879fedc05f1-config-data\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.484233 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/db14762a-eebd-41a0-b107-e879fedc05f1-memcached-tls-certs\") 
pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.484501 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db14762a-eebd-41a0-b107-e879fedc05f1-kolla-config\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.485118 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db14762a-eebd-41a0-b107-e879fedc05f1-config-data\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.515209 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db14762a-eebd-41a0-b107-e879fedc05f1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.520326 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.624285 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nflbx\" (UniqueName: \"kubernetes.io/projected/db14762a-eebd-41a0-b107-e879fedc05f1-kube-api-access-nflbx\") pod \"memcached-0\" (UID: \"db14762a-eebd-41a0-b107-e879fedc05f1\") " pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.849893 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 16 15:12:06 crc kubenswrapper[4705]: I0216 15:12:06.958762 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"50502923-5ef9-46a9-a23d-abe8face6040","Type":"ContainerStarted","Data":"a8e77084552df314e6bf7d1574fc9b66862eb9611d3bf0ea4678019797f18f4d"} Feb 16 15:12:07 crc kubenswrapper[4705]: I0216 15:12:07.303163 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 15:12:07 crc kubenswrapper[4705]: W0216 15:12:07.354531 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod616bbda0_7abf_4cfb_b7f8_f8cca8fb5eab.slice/crio-e0c98519d13faeb9bb646c3ae5e43bacaff3ed79ce7d9bc314c70b87ff627e67 WatchSource:0}: Error finding container e0c98519d13faeb9bb646c3ae5e43bacaff3ed79ce7d9bc314c70b87ff627e67: Status 404 returned error can't find the container with id e0c98519d13faeb9bb646c3ae5e43bacaff3ed79ce7d9bc314c70b87ff627e67 Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:07.992742 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab","Type":"ContainerStarted","Data":"e0c98519d13faeb9bb646c3ae5e43bacaff3ed79ce7d9bc314c70b87ff627e67"} Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.582172 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.583777 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.602517 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-p4v2d" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.630790 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.722337 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdfvl\" (UniqueName: \"kubernetes.io/projected/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0-kube-api-access-mdfvl\") pod \"kube-state-metrics-0\" (UID: \"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0\") " pod="openstack/kube-state-metrics-0" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.834942 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdfvl\" (UniqueName: \"kubernetes.io/projected/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0-kube-api-access-mdfvl\") pod \"kube-state-metrics-0\" (UID: \"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0\") " pod="openstack/kube-state-metrics-0" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.898595 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdfvl\" (UniqueName: \"kubernetes.io/projected/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0-kube-api-access-mdfvl\") pod \"kube-state-metrics-0\" (UID: \"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0\") " pod="openstack/kube-state-metrics-0" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:08.941225 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.780063 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.883359 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns"] Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.885285 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.889240 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-6zgbs" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.890940 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.917219 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns"] Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.979770 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: \"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:09 crc kubenswrapper[4705]: I0216 15:12:09.979967 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlmrv\" (UniqueName: \"kubernetes.io/projected/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-kube-api-access-vlmrv\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: 
\"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.089003 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlmrv\" (UniqueName: \"kubernetes.io/projected/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-kube-api-access-vlmrv\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: \"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.089088 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: \"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:10 crc kubenswrapper[4705]: E0216 15:12:10.089247 4705 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 16 15:12:10 crc kubenswrapper[4705]: E0216 15:12:10.089303 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-serving-cert podName:72697fcc-cd94-4ba9-9479-cb5bd82d83ab nodeName:}" failed. No retries permitted until 2026-02-16 15:12:10.589283962 +0000 UTC m=+1124.774261038 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-serving-cert") pod "observability-ui-dashboards-66cbf594b5-9hcns" (UID: "72697fcc-cd94-4ba9-9479-cb5bd82d83ab") : secret "observability-ui-dashboards" not found Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.089354 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.158194 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlmrv\" (UniqueName: \"kubernetes.io/projected/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-kube-api-access-vlmrv\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: \"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.220833 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.236564 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.245864 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.246248 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bs5tf" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.248282 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.248706 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.248778 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.248857 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.254685 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.266882 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298190 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 
15:12:10.298246 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/761a74d6-061c-47dd-b376-b6d6a1906382-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298277 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298302 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298320 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298344 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298374 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87msx\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-kube-api-access-87msx\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298410 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-config\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.298524 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.299403 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 
15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.363465 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b7cd49558-h4srk"] Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.365009 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.414837 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.414970 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/761a74d6-061c-47dd-b376-b6d6a1906382-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415079 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415116 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-service-ca\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415198 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415261 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415340 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415429 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-serving-cert\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415505 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87msx\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-kube-api-access-87msx\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" 
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415581 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.415609 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-oauth-config\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.417076 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnr6f\" (UniqueName: \"kubernetes.io/projected/eeed7723-4cdc-478c-870c-d0e7df3c5673-kube-api-access-dnr6f\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.417161 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-config\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.417241 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-oauth-serving-cert\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " 
pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.417351 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.417554 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-config\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.417574 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-trusted-ca-bundle\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.418622 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.419211 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.426212 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.436107 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.450347 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.456190 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/761a74d6-061c-47dd-b376-b6d6a1906382-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.460860 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87msx\" (UniqueName: 
\"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-kube-api-access-87msx\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.467456 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-config\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.468386 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.495142 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b7cd49558-h4srk"] Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.522935 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-oauth-serving-cert\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.523276 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-config\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.523349 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-trusted-ca-bundle\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.523519 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-service-ca\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.524016 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-serving-cert\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.524125 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-oauth-config\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.524228 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnr6f\" (UniqueName: \"kubernetes.io/projected/eeed7723-4cdc-478c-870c-d0e7df3c5673-kube-api-access-dnr6f\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.524574 4705 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-oauth-serving-cert\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.525549 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-service-ca\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.529708 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-config\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.535567 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.535611 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/88c6cd7cb604a645ab31c0e76d113b8c44ff69d3e39fcb5b354218108db12562/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.536404 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeed7723-4cdc-478c-870c-d0e7df3c5673-trusted-ca-bundle\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.549433 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-oauth-config\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.564650 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eeed7723-4cdc-478c-870c-d0e7df3c5673-console-serving-cert\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.565116 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnr6f\" (UniqueName: 
\"kubernetes.io/projected/eeed7723-4cdc-478c-870c-d0e7df3c5673-kube-api-access-dnr6f\") pod \"console-6b7cd49558-h4srk\" (UID: \"eeed7723-4cdc-478c-870c-d0e7df3c5673\") " pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.628344 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: \"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.635557 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72697fcc-cd94-4ba9-9479-cb5bd82d83ab-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-9hcns\" (UID: \"72697fcc-cd94-4ba9-9479-cb5bd82d83ab\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.690931 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.724375 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.832007 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" Feb 16 15:12:10 crc kubenswrapper[4705]: I0216 15:12:10.879779 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.276033 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-crbv8"] Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.277914 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.290283 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.290541 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6f7th" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.290661 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.311678 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crbv8"] Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-scripts\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369769 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-run\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369792 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-log-ovn\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369826 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-ovn-controller-tls-certs\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369847 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2gtc\" (UniqueName: \"kubernetes.io/projected/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-kube-api-access-k2gtc\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369883 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-run-ovn\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.369904 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-combined-ca-bundle\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.484226 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-pc9sf"] Feb 16 15:12:11 crc 
kubenswrapper[4705]: I0216 15:12:11.495717 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-run\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.495773 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-log-ovn\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.495867 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-ovn-controller-tls-certs\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.495901 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2gtc\" (UniqueName: \"kubernetes.io/projected/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-kube-api-access-k2gtc\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.495988 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-run-ovn\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.496032 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-combined-ca-bundle\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.496314 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-scripts\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.498709 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.503000 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-run-ovn\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.504345 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-run\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.504645 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-var-log-ovn\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.546624 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-scripts\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.557340 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-ovn-controller-tls-certs\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.564801 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2gtc\" (UniqueName: \"kubernetes.io/projected/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-kube-api-access-k2gtc\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.570137 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4374b7db-8c42-42e1-b2bd-c633bdd8edfd-combined-ca-bundle\") pod \"ovn-controller-crbv8\" (UID: \"4374b7db-8c42-42e1-b2bd-c633bdd8edfd\") " pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.634823 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-run\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.635006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-lib\") pod 
\"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.635131 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfmz\" (UniqueName: \"kubernetes.io/projected/be538ffa-cfea-445d-872f-1a0a68b77a50-kube-api-access-hqfmz\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.635223 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-etc-ovs\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.635261 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be538ffa-cfea-445d-872f-1a0a68b77a50-scripts\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.635635 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-log\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.641512 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pc9sf"] Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.672769 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-crbv8" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743162 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-log\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743276 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-run\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743315 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-lib\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743347 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfmz\" (UniqueName: \"kubernetes.io/projected/be538ffa-cfea-445d-872f-1a0a68b77a50-kube-api-access-hqfmz\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743395 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-etc-ovs\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743413 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be538ffa-cfea-445d-872f-1a0a68b77a50-scripts\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743739 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-lib\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.743971 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-log\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.744027 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-var-run\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.744149 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/be538ffa-cfea-445d-872f-1a0a68b77a50-etc-ovs\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.745818 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be538ffa-cfea-445d-872f-1a0a68b77a50-scripts\") pod 
\"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.795649 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfmz\" (UniqueName: \"kubernetes.io/projected/be538ffa-cfea-445d-872f-1a0a68b77a50-kube-api-access-hqfmz\") pod \"ovn-controller-ovs-pc9sf\" (UID: \"be538ffa-cfea-445d-872f-1a0a68b77a50\") " pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:11 crc kubenswrapper[4705]: I0216 15:12:11.959180 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.285952 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.289101 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.293069 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8kb4p" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.293404 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.295221 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.296175 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.296261 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.324802 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-nb-0"] Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.366387 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.366843 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.366971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.367302 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e54f9b0-7b03-46de-8c76-2a37e44a02df-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.367441 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e54f9b0-7b03-46de-8c76-2a37e44a02df-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" 
Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.367528 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.367926 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e54f9b0-7b03-46de-8c76-2a37e44a02df-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.368292 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nlvg\" (UniqueName: \"kubernetes.io/projected/1e54f9b0-7b03-46de-8c76-2a37e44a02df-kube-api-access-7nlvg\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.470890 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e54f9b0-7b03-46de-8c76-2a37e44a02df-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471004 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nlvg\" (UniqueName: \"kubernetes.io/projected/1e54f9b0-7b03-46de-8c76-2a37e44a02df-kube-api-access-7nlvg\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471051 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471087 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471116 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471169 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e54f9b0-7b03-46de-8c76-2a37e44a02df-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471204 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e54f9b0-7b03-46de-8c76-2a37e44a02df-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.471238 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.472204 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e54f9b0-7b03-46de-8c76-2a37e44a02df-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.472674 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e54f9b0-7b03-46de-8c76-2a37e44a02df-config\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.473448 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e54f9b0-7b03-46de-8c76-2a37e44a02df-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.479868 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.479916 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6a99574cc9e6913add35f0972791bc48bd808b6223c56c5c3ef1a6b5805e6404/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.490334 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nlvg\" (UniqueName: \"kubernetes.io/projected/1e54f9b0-7b03-46de-8c76-2a37e44a02df-kube-api-access-7nlvg\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.492188 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.492870 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.511019 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e54f9b0-7b03-46de-8c76-2a37e44a02df-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.564985 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da9b1113-e8f4-4884-9ed9-057c9955762a\") pod \"ovsdbserver-nb-0\" (UID: \"1e54f9b0-7b03-46de-8c76-2a37e44a02df\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:12 crc kubenswrapper[4705]: I0216 15:12:12.623815 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.823098 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.836312 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.838360 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.839277 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.840238 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.840324 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8c2hn" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.840935 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987104 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-00413287-2052-44e0-8e76-0690fadcc3fc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00413287-2052-44e0-8e76-0690fadcc3fc\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987167 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e71500-a592-4c97-86c1-4f3f6a4d1b41-config\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987192 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54e71500-a592-4c97-86c1-4f3f6a4d1b41-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987217 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987314 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/54e71500-a592-4c97-86c1-4f3f6a4d1b41-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987436 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g68mz\" (UniqueName: 
\"kubernetes.io/projected/54e71500-a592-4c97-86c1-4f3f6a4d1b41-kube-api-access-g68mz\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987502 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:15 crc kubenswrapper[4705]: I0216 15:12:15.987525 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.089948 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/54e71500-a592-4c97-86c1-4f3f6a4d1b41-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090034 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g68mz\" (UniqueName: \"kubernetes.io/projected/54e71500-a592-4c97-86c1-4f3f6a4d1b41-kube-api-access-g68mz\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090079 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090104 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090150 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-00413287-2052-44e0-8e76-0690fadcc3fc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00413287-2052-44e0-8e76-0690fadcc3fc\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090182 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e71500-a592-4c97-86c1-4f3f6a4d1b41-config\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090206 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54e71500-a592-4c97-86c1-4f3f6a4d1b41-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.090232 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.091570 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e71500-a592-4c97-86c1-4f3f6a4d1b41-config\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.091695 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/54e71500-a592-4c97-86c1-4f3f6a4d1b41-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.092531 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54e71500-a592-4c97-86c1-4f3f6a4d1b41-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.099434 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.099479 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-00413287-2052-44e0-8e76-0690fadcc3fc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00413287-2052-44e0-8e76-0690fadcc3fc\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/47827cdad2e80c3b2c570dce059979f5d8271785a0514c2276ab7f5ef7b1b052/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.102296 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.104092 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.106600 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g68mz\" (UniqueName: \"kubernetes.io/projected/54e71500-a592-4c97-86c1-4f3f6a4d1b41-kube-api-access-g68mz\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.107507 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e71500-a592-4c97-86c1-4f3f6a4d1b41-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.150860 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-00413287-2052-44e0-8e76-0690fadcc3fc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-00413287-2052-44e0-8e76-0690fadcc3fc\") pod \"ovsdbserver-sb-0\" (UID: \"54e71500-a592-4c97-86c1-4f3f6a4d1b41\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:16 crc kubenswrapper[4705]: I0216 15:12:16.171283 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:18 crc kubenswrapper[4705]: W0216 15:12:18.799979 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb14762a_eebd_41a0_b107_e879fedc05f1.slice/crio-49b0b02afd9feb56ba4a499c492057973bcde0612eba97d2f35b7a32fab0954a WatchSource:0}: Error finding container 49b0b02afd9feb56ba4a499c492057973bcde0612eba97d2f35b7a32fab0954a: Status 404 returned error can't find the container with id 49b0b02afd9feb56ba4a499c492057973bcde0612eba97d2f35b7a32fab0954a Feb 16 15:12:18 crc kubenswrapper[4705]: W0216 15:12:18.813739 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2fcf9e_1bc7_4b0c_aa83_b4d5daafbcf0.slice/crio-75cb532fcced0ca2257b46e26b2cad547a6e03dd08f6c3f879a11562ab1a0955 WatchSource:0}: Error finding container 75cb532fcced0ca2257b46e26b2cad547a6e03dd08f6c3f879a11562ab1a0955: Status 404 returned error can't find the container with id 75cb532fcced0ca2257b46e26b2cad547a6e03dd08f6c3f879a11562ab1a0955 Feb 16 15:12:19 crc kubenswrapper[4705]: I0216 15:12:19.310748 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0","Type":"ContainerStarted","Data":"75cb532fcced0ca2257b46e26b2cad547a6e03dd08f6c3f879a11562ab1a0955"} Feb 16 15:12:19 crc kubenswrapper[4705]: I0216 15:12:19.314046 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"db14762a-eebd-41a0-b107-e879fedc05f1","Type":"ContainerStarted","Data":"49b0b02afd9feb56ba4a499c492057973bcde0612eba97d2f35b7a32fab0954a"} Feb 16 15:12:23 crc kubenswrapper[4705]: I0216 15:12:23.354301 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.351097 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.351732 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrknb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(f6b410b5-951c-43d2-b846-3fef02ec0f7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:26 crc 
kubenswrapper[4705]: E0216 15:12:26.353832 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.355759 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.356141 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfwxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(070373d6-b0bd-43e2-bdf5-ca300875e65d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:26 crc 
kubenswrapper[4705]: E0216 15:12:26.357453 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.412352 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.416672 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.468105 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.471578 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> 
/var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pd25j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termin
ationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(3ba19f15-a399-4d4b-bf32-a2a870a660e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:26 crc kubenswrapper[4705]: E0216 15:12:26.478776 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" Feb 16 15:12:26 crc kubenswrapper[4705]: I0216 15:12:26.914766 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b7cd49558-h4srk"] Feb 16 15:12:27 crc kubenswrapper[4705]: E0216 15:12:27.408466 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" Feb 16 15:12:31 crc kubenswrapper[4705]: I0216 15:12:31.684419 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:12:31 crc kubenswrapper[4705]: I0216 15:12:31.685272 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:12:31 crc 
kubenswrapper[4705]: I0216 15:12:31.685334 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:12:31 crc kubenswrapper[4705]: I0216 15:12:31.686420 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de600c28f91eecebf3f1afcacfc61ecdebf8796eece435cf86c7979eb622b546"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:12:31 crc kubenswrapper[4705]: I0216 15:12:31.686485 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://de600c28f91eecebf3f1afcacfc61ecdebf8796eece435cf86c7979eb622b546" gracePeriod=600 Feb 16 15:12:32 crc kubenswrapper[4705]: I0216 15:12:32.474452 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="de600c28f91eecebf3f1afcacfc61ecdebf8796eece435cf86c7979eb622b546" exitCode=0 Feb 16 15:12:32 crc kubenswrapper[4705]: I0216 15:12:32.474510 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"de600c28f91eecebf3f1afcacfc61ecdebf8796eece435cf86c7979eb622b546"} Feb 16 15:12:32 crc kubenswrapper[4705]: I0216 15:12:32.474553 4705 scope.go:117] "RemoveContainer" containerID="edd58db5c11c3fe5d8c13faff30e9ac5edf92d3f3197f975dddc0c31823f6a25" Feb 16 15:12:32 crc kubenswrapper[4705]: E0216 15:12:32.582489 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 16 15:12:32 crc kubenswrapper[4705]: E0216 15:12:32.583247 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n85hb6h67fh66bh689h675h85hc5h5b9hd5h5f9hd4h587h88h8fhdfh8hd6h84h85h65ch59bh5cdh5b4h65h76h54bh9bh75h7h9fhd5q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeM
ount{Name:kube-api-access-nflbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(db14762a-eebd-41a0-b107-e879fedc05f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:32 crc kubenswrapper[4705]: E0216 15:12:32.584415 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="db14762a-eebd-41a0-b107-e879fedc05f1" Feb 16 15:12:33 crc kubenswrapper[4705]: I0216 15:12:33.144284 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 15:12:33 crc kubenswrapper[4705]: W0216 
15:12:33.484231 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54e71500_a592_4c97_86c1_4f3f6a4d1b41.slice/crio-6eee0af9e0a9679e0d0589416aab4d0c3d5fc54f9702b7223c78447260e619df WatchSource:0}: Error finding container 6eee0af9e0a9679e0d0589416aab4d0c3d5fc54f9702b7223c78447260e619df: Status 404 returned error can't find the container with id 6eee0af9e0a9679e0d0589416aab4d0c3d5fc54f9702b7223c78447260e619df Feb 16 15:12:33 crc kubenswrapper[4705]: I0216 15:12:33.492990 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e54f9b0-7b03-46de-8c76-2a37e44a02df","Type":"ContainerStarted","Data":"7015bf43b24ac0edcfb8e9b5ae06dfd4fb6a2c4ed1f37ccbce3950e4c8eb9b1c"} Feb 16 15:12:33 crc kubenswrapper[4705]: I0216 15:12:33.494717 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b7cd49558-h4srk" event={"ID":"eeed7723-4cdc-478c-870c-d0e7df3c5673","Type":"ContainerStarted","Data":"e761664f181fbe94235d0ac25e4c497d165a77851c154874a4ee2e27379ca601"} Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.506663 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="db14762a-eebd-41a0-b107-e879fedc05f1" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.529621 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.529849 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-76xwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevic
e{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-22j4x_openstack(3486f2d2-e6a5-44a0-b804-12f9b9fd6a27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.531120 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" podUID="3486f2d2-e6a5-44a0-b804-12f9b9fd6a27" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.541324 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.541532 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bms9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-zdn4j_openstack(d7dbc743-b65f-414c-adef-c3e8e158e4dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.543989 4705 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" podUID="d7dbc743-b65f-414c-adef-c3e8e158e4dc" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.628662 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.628894 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s7td,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-crh45_openstack(2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.630503 4705 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" podUID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.650342 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.650635 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gvv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-b59zw_openstack(6ebe5f1b-1a13-4172-8662-aeae2c43ade1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:33 crc kubenswrapper[4705]: E0216 15:12:33.651781 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" podUID="6ebe5f1b-1a13-4172-8662-aeae2c43ade1" Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.162238 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pc9sf"] Feb 16 15:12:34 crc kubenswrapper[4705]: W0216 15:12:34.263236 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe538ffa_cfea_445d_872f_1a0a68b77a50.slice/crio-48980dc065ccb6cb0a9e03e40b6bdecfe33219d3798afd739dac8df0f0d7ed77 WatchSource:0}: Error finding container 48980dc065ccb6cb0a9e03e40b6bdecfe33219d3798afd739dac8df0f0d7ed77: Status 404 returned error can't find the container with id 48980dc065ccb6cb0a9e03e40b6bdecfe33219d3798afd739dac8df0f0d7ed77 Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.366325 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crbv8"] Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.380933 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.393860 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns"] Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.400348 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.400433 4705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.400573 4705 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdfvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod kube-state-metrics-0_openstack(bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.402281 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" Feb 16 15:12:34 crc kubenswrapper[4705]: W0216 15:12:34.407146 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod761a74d6_061c_47dd_b376_b6d6a1906382.slice/crio-0527469390d6fe2114a9d14988dc215c1fbcef5ab135d077a80b8055e2b4b3bf WatchSource:0}: Error finding container 0527469390d6fe2114a9d14988dc215c1fbcef5ab135d077a80b8055e2b4b3bf: Status 404 returned error can't find the container with id 0527469390d6fe2114a9d14988dc215c1fbcef5ab135d077a80b8055e2b4b3bf Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.509266 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pc9sf" event={"ID":"be538ffa-cfea-445d-872f-1a0a68b77a50","Type":"ContainerStarted","Data":"48980dc065ccb6cb0a9e03e40b6bdecfe33219d3798afd739dac8df0f0d7ed77"} Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.511413 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b7cd49558-h4srk" event={"ID":"eeed7723-4cdc-478c-870c-d0e7df3c5673","Type":"ContainerStarted","Data":"570fe5e0a26564b86c0dafdca4bd08aba1d9fcfe2a696bf6e121665f1ee5c74c"} Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.514342 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8" 
event={"ID":"4374b7db-8c42-42e1-b2bd-c633bdd8edfd","Type":"ContainerStarted","Data":"f9058dd413ac7cfea3831d6df5667fadd3a7fa700e156492cd8034af807a3b42"} Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.520434 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" event={"ID":"72697fcc-cd94-4ba9-9479-cb5bd82d83ab","Type":"ContainerStarted","Data":"f3c882eb84d2e76a027b499c55e561e337efbeaf523fa716e554d4897f609379"} Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.521770 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"54e71500-a592-4c97-86c1-4f3f6a4d1b41","Type":"ContainerStarted","Data":"6eee0af9e0a9679e0d0589416aab4d0c3d5fc54f9702b7223c78447260e619df"} Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.524029 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerStarted","Data":"0527469390d6fe2114a9d14988dc215c1fbcef5ab135d077a80b8055e2b4b3bf"} Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.528647 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" podUID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.529039 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" Feb 16 15:12:34 crc kubenswrapper[4705]: E0216 15:12:34.529121 4705 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" podUID="d7dbc743-b65f-414c-adef-c3e8e158e4dc" Feb 16 15:12:34 crc kubenswrapper[4705]: I0216 15:12:34.551117 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b7cd49558-h4srk" podStartSLOduration=24.551091147 podStartE2EDuration="24.551091147s" podCreationTimestamp="2026-02-16 15:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:34.529794598 +0000 UTC m=+1148.714771724" watchObservedRunningTime="2026-02-16 15:12:34.551091147 +0000 UTC m=+1148.736068223" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.121256 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.140977 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.243289 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-config\") pod \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.243464 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gvv4\" (UniqueName: \"kubernetes.io/projected/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-kube-api-access-4gvv4\") pod \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.243507 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76xwq\" (UniqueName: \"kubernetes.io/projected/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-kube-api-access-76xwq\") pod \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.243786 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-dns-svc\") pod \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\" (UID: \"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27\") " Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.243815 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-config\") pod \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\" (UID: \"6ebe5f1b-1a13-4172-8662-aeae2c43ade1\") " Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.247545 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-config" (OuterVolumeSpecName: "config") pod "6ebe5f1b-1a13-4172-8662-aeae2c43ade1" (UID: "6ebe5f1b-1a13-4172-8662-aeae2c43ade1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.248108 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-config" (OuterVolumeSpecName: "config") pod "3486f2d2-e6a5-44a0-b804-12f9b9fd6a27" (UID: "3486f2d2-e6a5-44a0-b804-12f9b9fd6a27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.255268 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-kube-api-access-76xwq" (OuterVolumeSpecName: "kube-api-access-76xwq") pod "3486f2d2-e6a5-44a0-b804-12f9b9fd6a27" (UID: "3486f2d2-e6a5-44a0-b804-12f9b9fd6a27"). InnerVolumeSpecName "kube-api-access-76xwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.255750 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-kube-api-access-4gvv4" (OuterVolumeSpecName: "kube-api-access-4gvv4") pod "6ebe5f1b-1a13-4172-8662-aeae2c43ade1" (UID: "6ebe5f1b-1a13-4172-8662-aeae2c43ade1"). InnerVolumeSpecName "kube-api-access-4gvv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.255902 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3486f2d2-e6a5-44a0-b804-12f9b9fd6a27" (UID: "3486f2d2-e6a5-44a0-b804-12f9b9fd6a27"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.346488 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.346527 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.346539 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.346555 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gvv4\" (UniqueName: \"kubernetes.io/projected/6ebe5f1b-1a13-4172-8662-aeae2c43ade1-kube-api-access-4gvv4\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.346570 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76xwq\" (UniqueName: \"kubernetes.io/projected/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27-kube-api-access-76xwq\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.548784 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" event={"ID":"6ebe5f1b-1a13-4172-8662-aeae2c43ade1","Type":"ContainerDied","Data":"94d5ff52b59cc104492407be610cfd24a666cbfee388ce9d60b4354fff5e559a"} Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.548917 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b59zw" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.566668 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab","Type":"ContainerStarted","Data":"9e13cfdaa0e7860b7a1e850fe4dcb52caf0f6d03f39873bfe9cc711c338084e9"} Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.582864 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"50502923-5ef9-46a9-a23d-abe8face6040","Type":"ContainerStarted","Data":"103c84a7788dbadecc6c366546288c1783e2293189c562d266657674cbc9aa14"} Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.628597 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"99f3f757d43d2fd38017e0cb3e452f132236200b0a90db50ba2e30cfa5620a38"} Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.643934 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" event={"ID":"3486f2d2-e6a5-44a0-b804-12f9b9fd6a27","Type":"ContainerDied","Data":"0d5732ad1582d0dc0f1a09eb172ef4f895ed8673cbf8cf85d9d7eaad2e583287"} Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.643952 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-22j4x" Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.726348 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b59zw"] Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.778444 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b59zw"] Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.857530 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22j4x"] Feb 16 15:12:35 crc kubenswrapper[4705]: I0216 15:12:35.867812 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-22j4x"] Feb 16 15:12:36 crc kubenswrapper[4705]: I0216 15:12:36.444944 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3486f2d2-e6a5-44a0-b804-12f9b9fd6a27" path="/var/lib/kubelet/pods/3486f2d2-e6a5-44a0-b804-12f9b9fd6a27/volumes" Feb 16 15:12:36 crc kubenswrapper[4705]: I0216 15:12:36.446056 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ebe5f1b-1a13-4172-8662-aeae2c43ade1" path="/var/lib/kubelet/pods/6ebe5f1b-1a13-4172-8662-aeae2c43ade1/volumes" Feb 16 15:12:36 crc kubenswrapper[4705]: I0216 15:12:36.661727 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"139788ad-b160-4139-a6af-094e33c581e5","Type":"ContainerStarted","Data":"c45bc0861e5e942a3fddb03b7864490ab4f0322209d56a4aa3501d6face13652"} Feb 16 15:12:39 crc kubenswrapper[4705]: I0216 15:12:39.690436 4705 generic.go:334] "Generic (PLEG): container finished" podID="616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab" containerID="9e13cfdaa0e7860b7a1e850fe4dcb52caf0f6d03f39873bfe9cc711c338084e9" exitCode=0 Feb 16 15:12:39 crc kubenswrapper[4705]: I0216 15:12:39.690517 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab","Type":"ContainerDied","Data":"9e13cfdaa0e7860b7a1e850fe4dcb52caf0f6d03f39873bfe9cc711c338084e9"} Feb 16 15:12:39 crc kubenswrapper[4705]: I0216 15:12:39.693530 4705 generic.go:334] "Generic (PLEG): container finished" podID="50502923-5ef9-46a9-a23d-abe8face6040" containerID="103c84a7788dbadecc6c366546288c1783e2293189c562d266657674cbc9aa14" exitCode=0 Feb 16 15:12:39 crc kubenswrapper[4705]: I0216 15:12:39.693592 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"50502923-5ef9-46a9-a23d-abe8face6040","Type":"ContainerDied","Data":"103c84a7788dbadecc6c366546288c1783e2293189c562d266657674cbc9aa14"} Feb 16 15:12:40 crc kubenswrapper[4705]: I0216 15:12:40.725201 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:40 crc kubenswrapper[4705]: I0216 15:12:40.726250 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:40 crc kubenswrapper[4705]: I0216 15:12:40.732862 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.721569 4705 generic.go:334] "Generic (PLEG): container finished" podID="be538ffa-cfea-445d-872f-1a0a68b77a50" containerID="4c915b1f90c65a4caa63253e81c4e410b1a0159bd352e907ae1ccd0cccab77c8" exitCode=0 Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.722219 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pc9sf" event={"ID":"be538ffa-cfea-445d-872f-1a0a68b77a50","Type":"ContainerDied","Data":"4c915b1f90c65a4caa63253e81c4e410b1a0159bd352e907ae1ccd0cccab77c8"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.726802 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"1e54f9b0-7b03-46de-8c76-2a37e44a02df","Type":"ContainerStarted","Data":"3367abbf0870ea65517cf5b9c106672260204eaaef10c2aad38394ac50aff67a"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.729089 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab","Type":"ContainerStarted","Data":"98f6035a3a5636fd5198ef4888309ec6d2bd09b27036a32bfa21a4009719306d"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.731692 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"50502923-5ef9-46a9-a23d-abe8face6040","Type":"ContainerStarted","Data":"b3f125af6a38042cb9c2384da06d61de171663eea02070c2ed22d753c10aa053"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.733378 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8" event={"ID":"4374b7db-8c42-42e1-b2bd-c633bdd8edfd","Type":"ContainerStarted","Data":"71cc5ceacaa32910838197e021592ade6a1934e655ca603291b4135afb0575dd"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.733522 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-crbv8" Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.736709 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" event={"ID":"72697fcc-cd94-4ba9-9479-cb5bd82d83ab","Type":"ContainerStarted","Data":"72bd449a03d95e84997105dc6d7b60e7eef6f7f195cb1a8094b8aa8ab7f95ed1"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.738403 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"54e71500-a592-4c97-86c1-4f3f6a4d1b41","Type":"ContainerStarted","Data":"b704a3c240e305626a96fee64b859e636d024e4a1605be96661ede88460480c6"} Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.744697 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console/console-6b7cd49558-h4srk" Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.813994 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-9hcns" podStartSLOduration=26.646955233 podStartE2EDuration="32.813947676s" podCreationTimestamp="2026-02-16 15:12:09 +0000 UTC" firstStartedPulling="2026-02-16 15:12:34.413511637 +0000 UTC m=+1148.598488713" lastFinishedPulling="2026-02-16 15:12:40.58050408 +0000 UTC m=+1154.765481156" observedRunningTime="2026-02-16 15:12:41.76288576 +0000 UTC m=+1155.947862836" watchObservedRunningTime="2026-02-16 15:12:41.813947676 +0000 UTC m=+1155.998924752" Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.832234 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-crbv8" podStartSLOduration=24.67030091 podStartE2EDuration="30.83220947s" podCreationTimestamp="2026-02-16 15:12:11 +0000 UTC" firstStartedPulling="2026-02-16 15:12:34.435198697 +0000 UTC m=+1148.620175783" lastFinishedPulling="2026-02-16 15:12:40.597107267 +0000 UTC m=+1154.782084343" observedRunningTime="2026-02-16 15:12:41.826040137 +0000 UTC m=+1156.011017223" watchObservedRunningTime="2026-02-16 15:12:41.83220947 +0000 UTC m=+1156.017186546" Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.850095 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.069651888 podStartE2EDuration="38.850068443s" podCreationTimestamp="2026-02-16 15:12:03 +0000 UTC" firstStartedPulling="2026-02-16 15:12:06.44817876 +0000 UTC m=+1120.633155836" lastFinishedPulling="2026-02-16 15:12:34.228595315 +0000 UTC m=+1148.413572391" observedRunningTime="2026-02-16 15:12:41.843242891 +0000 UTC m=+1156.028219977" watchObservedRunningTime="2026-02-16 15:12:41.850068443 +0000 UTC m=+1156.035045519" Feb 16 15:12:41 crc 
kubenswrapper[4705]: I0216 15:12:41.941742 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.098653513 podStartE2EDuration="36.941710331s" podCreationTimestamp="2026-02-16 15:12:05 +0000 UTC" firstStartedPulling="2026-02-16 15:12:07.418092993 +0000 UTC m=+1121.603070069" lastFinishedPulling="2026-02-16 15:12:34.261149821 +0000 UTC m=+1148.446126887" observedRunningTime="2026-02-16 15:12:41.906360076 +0000 UTC m=+1156.091337172" watchObservedRunningTime="2026-02-16 15:12:41.941710331 +0000 UTC m=+1156.126687407" Feb 16 15:12:41 crc kubenswrapper[4705]: I0216 15:12:41.956143 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5cb874789d-44cjq"] Feb 16 15:12:42 crc kubenswrapper[4705]: I0216 15:12:42.753895 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f6b410b5-951c-43d2-b846-3fef02ec0f7f","Type":"ContainerStarted","Data":"3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523"} Feb 16 15:12:43 crc kubenswrapper[4705]: E0216 15:12:43.155455 4705 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.47:57328->38.102.83.47:38595: read tcp 38.102.83.47:57328->38.102.83.47:38595: read: connection reset by peer Feb 16 15:12:43 crc kubenswrapper[4705]: I0216 15:12:43.767460 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070373d6-b0bd-43e2-bdf5-ca300875e65d","Type":"ContainerStarted","Data":"663ebd3ccb0d52cf06babb260d76ccd359a0593b49138f63e6178bfe5bfd914d"} Feb 16 15:12:43 crc kubenswrapper[4705]: I0216 15:12:43.768926 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3ba19f15-a399-4d4b-bf32-a2a870a660e5","Type":"ContainerStarted","Data":"86e9ac4153a2ccf0f2f0a689cbb68d98c66cd9f62606340a11ddf8bd0f8e2f02"} Feb 16 15:12:44 crc kubenswrapper[4705]: 
I0216 15:12:44.790531 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerStarted","Data":"f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902"} Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.798976 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pc9sf" event={"ID":"be538ffa-cfea-445d-872f-1a0a68b77a50","Type":"ContainerStarted","Data":"cdc54b8b8ee52f0a93f7eebca14a749c54f9f809d78cde49af28ee6f28b31e7d"} Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.799039 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pc9sf" event={"ID":"be538ffa-cfea-445d-872f-1a0a68b77a50","Type":"ContainerStarted","Data":"90a0f7bd9d02870fea5ab26b89c7f506367d3cf65f7f8c24d1a8876c85ab1f9b"} Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.799771 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.799850 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.808117 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1e54f9b0-7b03-46de-8c76-2a37e44a02df","Type":"ContainerStarted","Data":"45e1887f2222c622b4e31473b0ff7feaf435d309c41cdd66e82977f0411a341e"} Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.818669 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"54e71500-a592-4c97-86c1-4f3f6a4d1b41","Type":"ContainerStarted","Data":"82b041e18447f832e42be57b619cfa5f2d216bdb2d56a89acb9aa6d12074ef52"} Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.924642 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-ovs-pc9sf" podStartSLOduration=27.608604982 podStartE2EDuration="33.924608187s" podCreationTimestamp="2026-02-16 15:12:11 +0000 UTC" firstStartedPulling="2026-02-16 15:12:34.270856564 +0000 UTC m=+1148.455833640" lastFinishedPulling="2026-02-16 15:12:40.586859769 +0000 UTC m=+1154.771836845" observedRunningTime="2026-02-16 15:12:44.868916591 +0000 UTC m=+1159.053893667" watchObservedRunningTime="2026-02-16 15:12:44.924608187 +0000 UTC m=+1159.109585263" Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.954982 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=24.164008579 podStartE2EDuration="33.954953891s" podCreationTimestamp="2026-02-16 15:12:11 +0000 UTC" firstStartedPulling="2026-02-16 15:12:32.625883562 +0000 UTC m=+1146.810860638" lastFinishedPulling="2026-02-16 15:12:42.416828874 +0000 UTC m=+1156.601805950" observedRunningTime="2026-02-16 15:12:44.919745431 +0000 UTC m=+1159.104722507" watchObservedRunningTime="2026-02-16 15:12:44.954953891 +0000 UTC m=+1159.139930967" Feb 16 15:12:44 crc kubenswrapper[4705]: I0216 15:12:44.968084 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=22.066136325 podStartE2EDuration="30.96806204s" podCreationTimestamp="2026-02-16 15:12:14 +0000 UTC" firstStartedPulling="2026-02-16 15:12:33.509457926 +0000 UTC m=+1147.694435002" lastFinishedPulling="2026-02-16 15:12:42.411383641 +0000 UTC m=+1156.596360717" observedRunningTime="2026-02-16 15:12:44.954530779 +0000 UTC m=+1159.139507875" watchObservedRunningTime="2026-02-16 15:12:44.96806204 +0000 UTC m=+1159.153039116" Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.197524 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.198063 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.498748 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.624535 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.663887 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.829502 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.867152 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:45 crc kubenswrapper[4705]: I0216 15:12:45.952223 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.143722 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zdn4j"] Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.171715 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.171780 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.206478 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kp5gg"] Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.208707 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.219849 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.258703 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jbdgd"] Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.260526 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.263094 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.265979 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-config\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.266030 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.266070 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 
15:12:46.266210 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npq96\" (UniqueName: \"kubernetes.io/projected/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-kube-api-access-npq96\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.266427 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.268594 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kp5gg"] Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.316023 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jbdgd"] Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.369907 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-config\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.369976 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-config\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370007 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370041 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370069 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370168 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6m6n\" (UniqueName: \"kubernetes.io/projected/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-kube-api-access-s6m6n\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370205 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-ovs-rundir\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370228 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-combined-ca-bundle\") pod \"ovn-controller-metrics-jbdgd\" (UID: 
\"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370254 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npq96\" (UniqueName: \"kubernetes.io/projected/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-kube-api-access-npq96\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.370289 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-ovn-rundir\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.371116 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-config\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.372176 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.372483 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.395227 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npq96\" (UniqueName: \"kubernetes.io/projected/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-kube-api-access-npq96\") pod \"dnsmasq-dns-7fd796d7df-kp5gg\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") " pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.479311 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6m6n\" (UniqueName: \"kubernetes.io/projected/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-kube-api-access-s6m6n\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.483165 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-ovs-rundir\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.483673 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-combined-ca-bundle\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.483627 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-ovs-rundir\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 
15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.484808 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-ovn-rundir\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.484965 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-config\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.485187 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.486145 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-ovn-rundir\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.487390 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-config\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.511442 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.517908 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6m6n\" (UniqueName: \"kubernetes.io/projected/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-kube-api-access-s6m6n\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.517993 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772-combined-ca-bundle\") pod \"ovn-controller-metrics-jbdgd\" (UID: \"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772\") " pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.522225 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.524267 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.552602 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.598268 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-jbdgd" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.623898 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-crh45"] Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.649574 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7rdzt"] Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.677853 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.677886 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7rdzt"] Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.699861 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.804723 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.802587 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.805553 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.805624 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grvmd\" (UniqueName: \"kubernetes.io/projected/cf9cafcc-24ed-4b80-9483-33f60d92f00f-kube-api-access-grvmd\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.805708 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.805740 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-config\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" 
(UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.869209 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" event={"ID":"d7dbc743-b65f-414c-adef-c3e8e158e4dc","Type":"ContainerDied","Data":"cba1b72db61c105e5863e586d645a2f7e94a83ed46db96da197a374840b783e3"} Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.869443 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zdn4j" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.910308 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-dns-svc\") pod \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.910868 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bms9k\" (UniqueName: \"kubernetes.io/projected/d7dbc743-b65f-414c-adef-c3e8e158e4dc-kube-api-access-bms9k\") pod \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.911194 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-config\") pod \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\" (UID: \"d7dbc743-b65f-414c-adef-c3e8e158e4dc\") " Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.911576 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grvmd\" (UniqueName: \"kubernetes.io/projected/cf9cafcc-24ed-4b80-9483-33f60d92f00f-kube-api-access-grvmd\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") 
" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.911769 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.911841 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-config\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.911987 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.912155 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.913407 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7dbc743-b65f-414c-adef-c3e8e158e4dc" (UID: "d7dbc743-b65f-414c-adef-c3e8e158e4dc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.916455 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-config\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.917024 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.917725 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.918214 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-config" (OuterVolumeSpecName: "config") pod "d7dbc743-b65f-414c-adef-c3e8e158e4dc" (UID: "d7dbc743-b65f-414c-adef-c3e8e158e4dc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.920056 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.921428 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7dbc743-b65f-414c-adef-c3e8e158e4dc-kube-api-access-bms9k" (OuterVolumeSpecName: "kube-api-access-bms9k") pod "d7dbc743-b65f-414c-adef-c3e8e158e4dc" (UID: "d7dbc743-b65f-414c-adef-c3e8e158e4dc"). InnerVolumeSpecName "kube-api-access-bms9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.942770 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:46 crc kubenswrapper[4705]: I0216 15:12:46.944198 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grvmd\" (UniqueName: \"kubernetes.io/projected/cf9cafcc-24ed-4b80-9483-33f60d92f00f-kube-api-access-grvmd\") pod \"dnsmasq-dns-86db49b7ff-7rdzt\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.023873 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.023907 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7dbc743-b65f-414c-adef-c3e8e158e4dc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:47 crc 
kubenswrapper[4705]: I0216 15:12:47.023957 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bms9k\" (UniqueName: \"kubernetes.io/projected/d7dbc743-b65f-414c-adef-c3e8e158e4dc-kube-api-access-bms9k\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.092184 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.276341 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zdn4j"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.294527 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zdn4j"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.315743 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jbdgd"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.327307 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.334906 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.341927 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.342244 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.342402 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.346875 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jx8cn" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.370562 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kp5gg"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.384215 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.438620 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca8a807-8e20-4d12-8355-09c1883163ca-config\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.438737 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.438824 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-675g6\" (UniqueName: 
\"kubernetes.io/projected/1ca8a807-8e20-4d12-8355-09c1883163ca-kube-api-access-675g6\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.438948 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ca8a807-8e20-4d12-8355-09c1883163ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.438975 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.439023 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ca8a807-8e20-4d12-8355-09c1883163ca-scripts\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.439042 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541441 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541521 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca8a807-8e20-4d12-8355-09c1883163ca-config\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541572 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541642 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-675g6\" (UniqueName: \"kubernetes.io/projected/1ca8a807-8e20-4d12-8355-09c1883163ca-kube-api-access-675g6\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541730 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ca8a807-8e20-4d12-8355-09c1883163ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541753 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.541787 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ca8a807-8e20-4d12-8355-09c1883163ca-scripts\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.542916 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ca8a807-8e20-4d12-8355-09c1883163ca-scripts\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.543680 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca8a807-8e20-4d12-8355-09c1883163ca-config\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.545402 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ca8a807-8e20-4d12-8355-09c1883163ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.552605 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.552655 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 
15:12:47.557830 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca8a807-8e20-4d12-8355-09c1883163ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.563629 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-675g6\" (UniqueName: \"kubernetes.io/projected/1ca8a807-8e20-4d12-8355-09c1883163ca-kube-api-access-675g6\") pod \"ovn-northd-0\" (UID: \"1ca8a807-8e20-4d12-8355-09c1883163ca\") " pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.616072 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-340c-account-create-update-htclx"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.617756 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.620135 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.630879 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-340c-account-create-update-htclx"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.692195 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zf4nh"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.694010 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.710188 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zf4nh"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.743277 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.757049 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8vl6\" (UniqueName: \"kubernetes.io/projected/b2232806-cac7-4787-839b-9bcecac93820-kube-api-access-f8vl6\") pod \"keystone-340c-account-create-update-htclx\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.757240 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q669n\" (UniqueName: \"kubernetes.io/projected/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-kube-api-access-q669n\") pod \"keystone-db-create-zf4nh\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.757405 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2232806-cac7-4787-839b-9bcecac93820-operator-scripts\") pod \"keystone-340c-account-create-update-htclx\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.757458 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-operator-scripts\") pod \"keystone-db-create-zf4nh\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.779536 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-7hxxb"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.781150 4705 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.794255 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7hxxb"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.859862 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngs6q\" (UniqueName: \"kubernetes.io/projected/3f443bcd-c93f-4b89-a048-cc92f28f5854-kube-api-access-ngs6q\") pod \"placement-db-create-7hxxb\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.859985 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q669n\" (UniqueName: \"kubernetes.io/projected/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-kube-api-access-q669n\") pod \"keystone-db-create-zf4nh\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.860114 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2232806-cac7-4787-839b-9bcecac93820-operator-scripts\") pod \"keystone-340c-account-create-update-htclx\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.860148 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-operator-scripts\") pod \"keystone-db-create-zf4nh\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.860197 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-f8vl6\" (UniqueName: \"kubernetes.io/projected/b2232806-cac7-4787-839b-9bcecac93820-kube-api-access-f8vl6\") pod \"keystone-340c-account-create-update-htclx\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.860236 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f443bcd-c93f-4b89-a048-cc92f28f5854-operator-scripts\") pod \"placement-db-create-7hxxb\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.861678 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2232806-cac7-4787-839b-9bcecac93820-operator-scripts\") pod \"keystone-340c-account-create-update-htclx\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.862339 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-operator-scripts\") pod \"keystone-db-create-zf4nh\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.867608 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-78e4-account-create-update-475d7"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.869139 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.876242 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.883253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" event={"ID":"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d","Type":"ContainerStarted","Data":"d300d6e6a8e721e23b118ec6cd1d7277765e081fcd0cf727ad7a0cfd4099f2fa"} Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.883957 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8vl6\" (UniqueName: \"kubernetes.io/projected/b2232806-cac7-4787-839b-9bcecac93820-kube-api-access-f8vl6\") pod \"keystone-340c-account-create-update-htclx\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.884500 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78e4-account-create-update-475d7"] Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.886431 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jbdgd" event={"ID":"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772","Type":"ContainerStarted","Data":"2d3c1f8b23cb89332cae64e908bf8a38b99bfe8924450d91aaa1b4576a0f68f6"} Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.891589 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q669n\" (UniqueName: \"kubernetes.io/projected/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-kube-api-access-q669n\") pod \"keystone-db-create-zf4nh\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.965966 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f443bcd-c93f-4b89-a048-cc92f28f5854-operator-scripts\") pod \"placement-db-create-7hxxb\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.966075 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-operator-scripts\") pod \"placement-78e4-account-create-update-475d7\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.966204 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngs6q\" (UniqueName: \"kubernetes.io/projected/3f443bcd-c93f-4b89-a048-cc92f28f5854-kube-api-access-ngs6q\") pod \"placement-db-create-7hxxb\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.966620 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjpnj\" (UniqueName: \"kubernetes.io/projected/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-kube-api-access-mjpnj\") pod \"placement-78e4-account-create-update-475d7\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.967228 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f443bcd-c93f-4b89-a048-cc92f28f5854-operator-scripts\") pod \"placement-db-create-7hxxb\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.976393 4705 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:47 crc kubenswrapper[4705]: I0216 15:12:47.984645 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngs6q\" (UniqueName: \"kubernetes.io/projected/3f443bcd-c93f-4b89-a048-cc92f28f5854-kube-api-access-ngs6q\") pod \"placement-db-create-7hxxb\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.056941 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.091153 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjpnj\" (UniqueName: \"kubernetes.io/projected/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-kube-api-access-mjpnj\") pod \"placement-78e4-account-create-update-475d7\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.096226 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-operator-scripts\") pod \"placement-78e4-account-create-update-475d7\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.100735 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-operator-scripts\") pod \"placement-78e4-account-create-update-475d7\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 
15:12:48.108275 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.122879 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjpnj\" (UniqueName: \"kubernetes.io/projected/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-kube-api-access-mjpnj\") pod \"placement-78e4-account-create-update-475d7\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.253047 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.438710 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7dbc743-b65f-414c-adef-c3e8e158e4dc" path="/var/lib/kubelet/pods/d7dbc743-b65f-414c-adef-c3e8e158e4dc/volumes" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.492587 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7rdzt"] Feb 16 15:12:48 crc kubenswrapper[4705]: W0216 15:12:48.575564 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf9cafcc_24ed_4b80_9483_33f60d92f00f.slice/crio-4c2b9573a1dddb4e4b1bb02fe4917b62d7337ef3ddbdeb3932c87fcea91971b6 WatchSource:0}: Error finding container 4c2b9573a1dddb4e4b1bb02fe4917b62d7337ef3ddbdeb3932c87fcea91971b6: Status 404 returned error can't find the container with id 4c2b9573a1dddb4e4b1bb02fe4917b62d7337ef3ddbdeb3932c87fcea91971b6 Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.709610 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-n5lkc"] Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.711796 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.723056 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq9j7\" (UniqueName: \"kubernetes.io/projected/a486f037-5709-4199-9f76-0cb0c995af25-kube-api-access-sq9j7\") pod \"mysqld-exporter-openstack-db-create-n5lkc\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.723227 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a486f037-5709-4199-9f76-0cb0c995af25-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-n5lkc\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.730648 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-n5lkc"] Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.802690 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.826674 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq9j7\" (UniqueName: \"kubernetes.io/projected/a486f037-5709-4199-9f76-0cb0c995af25-kube-api-access-sq9j7\") pod \"mysqld-exporter-openstack-db-create-n5lkc\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.826834 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a486f037-5709-4199-9f76-0cb0c995af25-operator-scripts\") pod 
\"mysqld-exporter-openstack-db-create-n5lkc\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.827749 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a486f037-5709-4199-9f76-0cb0c995af25-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-n5lkc\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.829288 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0063-account-create-update-4tnvs"] Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.830962 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.842294 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.842882 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0063-account-create-update-4tnvs"] Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.852783 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq9j7\" (UniqueName: \"kubernetes.io/projected/a486f037-5709-4199-9f76-0cb0c995af25-kube-api-access-sq9j7\") pod \"mysqld-exporter-openstack-db-create-n5lkc\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.898731 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" 
event={"ID":"cf9cafcc-24ed-4b80-9483-33f60d92f00f","Type":"ContainerStarted","Data":"4c2b9573a1dddb4e4b1bb02fe4917b62d7337ef3ddbdeb3932c87fcea91971b6"} Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.901861 4705 generic.go:334] "Generic (PLEG): container finished" podID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" containerID="624a040076e9481702a4c8515e6484398440390ac5169bec50ef29cc5f828a9c" exitCode=0 Feb 16 15:12:48 crc kubenswrapper[4705]: I0216 15:12:48.903159 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" event={"ID":"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec","Type":"ContainerDied","Data":"624a040076e9481702a4c8515e6484398440390ac5169bec50ef29cc5f828a9c"} Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.032849 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37b9312-710d-49b4-8cc7-3956df176627-operator-scripts\") pod \"mysqld-exporter-0063-account-create-update-4tnvs\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.033419 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftvm5\" (UniqueName: \"kubernetes.io/projected/f37b9312-710d-49b4-8cc7-3956df176627-kube-api-access-ftvm5\") pod \"mysqld-exporter-0063-account-create-update-4tnvs\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.103406 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.136581 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37b9312-710d-49b4-8cc7-3956df176627-operator-scripts\") pod \"mysqld-exporter-0063-account-create-update-4tnvs\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.136661 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftvm5\" (UniqueName: \"kubernetes.io/projected/f37b9312-710d-49b4-8cc7-3956df176627-kube-api-access-ftvm5\") pod \"mysqld-exporter-0063-account-create-update-4tnvs\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.137332 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37b9312-710d-49b4-8cc7-3956df176627-operator-scripts\") pod \"mysqld-exporter-0063-account-create-update-4tnvs\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.175315 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftvm5\" (UniqueName: \"kubernetes.io/projected/f37b9312-710d-49b4-8cc7-3956df176627-kube-api-access-ftvm5\") pod \"mysqld-exporter-0063-account-create-update-4tnvs\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.235787 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7hxxb"] Feb 16 15:12:49 crc 
kubenswrapper[4705]: I0216 15:12:49.246736 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78e4-account-create-update-475d7"] Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.261755 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-340c-account-create-update-htclx"] Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.272859 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zf4nh"] Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.314177 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.441007 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.470740 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.621286 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.751358 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-config\") pod \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.751906 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-dns-svc\") pod \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.752191 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s7td\" (UniqueName: \"kubernetes.io/projected/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-kube-api-access-4s7td\") pod \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\" (UID: \"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec\") " Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.857240 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-config" (OuterVolumeSpecName: "config") pod "2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" (UID: "2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.869224 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.887641 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-kube-api-access-4s7td" (OuterVolumeSpecName: "kube-api-access-4s7td") pod "2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" (UID: "2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec"). InnerVolumeSpecName "kube-api-access-4s7td". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.959282 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" (UID: "2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.980520 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.981734 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s7td\" (UniqueName: \"kubernetes.io/projected/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec-kube-api-access-4s7td\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.992884 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-340c-account-create-update-htclx" event={"ID":"b2232806-cac7-4787-839b-9bcecac93820","Type":"ContainerStarted","Data":"3fdde6bf2ee1b1702f08cb70c219c91b36ee883cbd73c8d9f4661db6a85f4944"} Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.995327 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" event={"ID":"2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec","Type":"ContainerDied","Data":"483f41b8e768070c0e3971042788df02650602d14770eb6fc300e60a9f3c1c36"} Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.995360 4705 scope.go:117] "RemoveContainer" containerID="624a040076e9481702a4c8515e6484398440390ac5169bec50ef29cc5f828a9c" Feb 16 15:12:49 crc kubenswrapper[4705]: I0216 15:12:49.995506 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-crh45" Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.008800 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jbdgd" event={"ID":"17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772","Type":"ContainerStarted","Data":"25e5df20b5ac0f419ea672a4b6835dc8eab8bdf24f46ceaaacefb9c081c9f388"} Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.010758 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78e4-account-create-update-475d7" event={"ID":"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc","Type":"ContainerStarted","Data":"a59235e29e44d652ce2af2bb1d572a870948acfb1d24657f2e416c6610c19271"} Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.012605 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1ca8a807-8e20-4d12-8355-09c1883163ca","Type":"ContainerStarted","Data":"ccbae3cf8036f73dabe6b4d81802e346d096084f62a4df545bbbc7c49f750351"} Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.020665 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zf4nh" event={"ID":"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca","Type":"ContainerStarted","Data":"0cd816be62ee7b758436a143d7764c7aad278e11525f8b11522fd076ebb1aca6"} Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.026084 4705 generic.go:334] "Generic (PLEG): container finished" podID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerID="01f75b10ed3403636c6ff4d8d3dc13406165f688cf513365a0ee3449c67e9dd6" exitCode=0 Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.026169 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" event={"ID":"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d","Type":"ContainerDied","Data":"01f75b10ed3403636c6ff4d8d3dc13406165f688cf513365a0ee3449c67e9dd6"} Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.032950 4705 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/placement-db-create-7hxxb" event={"ID":"3f443bcd-c93f-4b89-a048-cc92f28f5854","Type":"ContainerStarted","Data":"b883422aff22cd30343e7806da99354205b58e98ffb974a4564c1e46d5973c51"} Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.054703 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jbdgd" podStartSLOduration=4.054675722 podStartE2EDuration="4.054675722s" podCreationTimestamp="2026-02-16 15:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:50.03292198 +0000 UTC m=+1164.217899056" watchObservedRunningTime="2026-02-16 15:12:50.054675722 +0000 UTC m=+1164.239652798" Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.124837 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-crh45"] Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.137137 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-crh45"] Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.390193 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0063-account-create-update-4tnvs"] Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.452069 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" path="/var/lib/kubelet/pods/2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec/volumes" Feb 16 15:12:50 crc kubenswrapper[4705]: I0216 15:12:50.518358 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-n5lkc"] Feb 16 15:12:50 crc kubenswrapper[4705]: W0216 15:12:50.542847 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda486f037_5709_4199_9f76_0cb0c995af25.slice/crio-f543cf5460ba46aa6d8f4bdfe041f05f7679a910f9549235f38ee0d748799b96 WatchSource:0}: Error finding container f543cf5460ba46aa6d8f4bdfe041f05f7679a910f9549235f38ee0d748799b96: Status 404 returned error can't find the container with id f543cf5460ba46aa6d8f4bdfe041f05f7679a910f9549235f38ee0d748799b96 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.059010 4705 generic.go:334] "Generic (PLEG): container finished" podID="cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" containerID="931b20b998ef273223e9f5d6e3f1f3e4584cf0ee619597e2b65633773ea18c75" exitCode=0 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.059090 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78e4-account-create-update-475d7" event={"ID":"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc","Type":"ContainerDied","Data":"931b20b998ef273223e9f5d6e3f1f3e4584cf0ee619597e2b65633773ea18c75"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.064817 4705 generic.go:334] "Generic (PLEG): container finished" podID="69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" containerID="d200b7c2e16f651dc486f4322085e2d7e7499ef7b85b5e81ebde83ca03928405" exitCode=0 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.065597 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zf4nh" event={"ID":"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca","Type":"ContainerDied","Data":"d200b7c2e16f651dc486f4322085e2d7e7499ef7b85b5e81ebde83ca03928405"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.069125 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" event={"ID":"a486f037-5709-4199-9f76-0cb0c995af25","Type":"ContainerStarted","Data":"f543cf5460ba46aa6d8f4bdfe041f05f7679a910f9549235f38ee0d748799b96"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.072345 4705 generic.go:334] "Generic (PLEG): container 
finished" podID="3f443bcd-c93f-4b89-a048-cc92f28f5854" containerID="8cbd1af309adfc1dafcf0ea3d77759d2f86265b9808b0b7435417bb754ee409d" exitCode=0 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.072476 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7hxxb" event={"ID":"3f443bcd-c93f-4b89-a048-cc92f28f5854","Type":"ContainerDied","Data":"8cbd1af309adfc1dafcf0ea3d77759d2f86265b9808b0b7435417bb754ee409d"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.089621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0","Type":"ContainerStarted","Data":"24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.090018 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.102868 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" event={"ID":"f37b9312-710d-49b4-8cc7-3956df176627","Type":"ContainerStarted","Data":"688b2130c66b5cedadd83f7eb71a2a00275c8148d969e68e1dce039d0f445cc4"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.108259 4705 generic.go:334] "Generic (PLEG): container finished" podID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerID="aab3bf3fd9a6ac7b00f1d7f4d403634f6903e2d7b39a53d0805702ee717f2a00" exitCode=0 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.108328 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" event={"ID":"cf9cafcc-24ed-4b80-9483-33f60d92f00f","Type":"ContainerDied","Data":"aab3bf3fd9a6ac7b00f1d7f4d403634f6903e2d7b39a53d0805702ee717f2a00"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.116938 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"db14762a-eebd-41a0-b107-e879fedc05f1","Type":"ContainerStarted","Data":"0cd96d2ab8811d31f81a2459e20cd49de9c11b08a9a5f74ff92a026484ef6d86"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.117705 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.121932 4705 generic.go:334] "Generic (PLEG): container finished" podID="761a74d6-061c-47dd-b376-b6d6a1906382" containerID="f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902" exitCode=0 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.121985 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerDied","Data":"f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.133251 4705 generic.go:334] "Generic (PLEG): container finished" podID="b2232806-cac7-4787-839b-9bcecac93820" containerID="a6d8674e75cd34a23ae23cec074aadbd60e573be5fb8f1c35656725571554e5a" exitCode=0 Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.133756 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-340c-account-create-update-htclx" event={"ID":"b2232806-cac7-4787-839b-9bcecac93820","Type":"ContainerDied","Data":"a6d8674e75cd34a23ae23cec074aadbd60e573be5fb8f1c35656725571554e5a"} Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.148139 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.28460356 podStartE2EDuration="43.14811692s" podCreationTimestamp="2026-02-16 15:12:08 +0000 UTC" firstStartedPulling="2026-02-16 15:12:18.821045821 +0000 UTC m=+1133.006022907" lastFinishedPulling="2026-02-16 15:12:49.684559191 +0000 UTC m=+1163.869536267" observedRunningTime="2026-02-16 15:12:51.131556044 +0000 UTC m=+1165.316533130" 
watchObservedRunningTime="2026-02-16 15:12:51.14811692 +0000 UTC m=+1165.333093996" Feb 16 15:12:51 crc kubenswrapper[4705]: I0216 15:12:51.221002 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.199647631 podStartE2EDuration="45.22097469s" podCreationTimestamp="2026-02-16 15:12:06 +0000 UTC" firstStartedPulling="2026-02-16 15:12:18.80431125 +0000 UTC m=+1132.989288336" lastFinishedPulling="2026-02-16 15:12:49.825638319 +0000 UTC m=+1164.010615395" observedRunningTime="2026-02-16 15:12:51.191047288 +0000 UTC m=+1165.376024384" watchObservedRunningTime="2026-02-16 15:12:51.22097469 +0000 UTC m=+1165.405951766" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.146148 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" event={"ID":"cf9cafcc-24ed-4b80-9483-33f60d92f00f","Type":"ContainerStarted","Data":"fbd2f10536c7c8de9fd23012a23722dfc54f26482b28650f111c8e0634add3bd"} Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.146937 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.147784 4705 generic.go:334] "Generic (PLEG): container finished" podID="a486f037-5709-4199-9f76-0cb0c995af25" containerID="5297f3386efbde9d5a58546d4fc2397672bac40dc5cdf3c17082d57b2647467b" exitCode=0 Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.147907 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" event={"ID":"a486f037-5709-4199-9f76-0cb0c995af25","Type":"ContainerDied","Data":"5297f3386efbde9d5a58546d4fc2397672bac40dc5cdf3c17082d57b2647467b"} Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.150669 4705 generic.go:334] "Generic (PLEG): container finished" podID="f37b9312-710d-49b4-8cc7-3956df176627" 
containerID="0017c5743d3acab30b80453ad1028a61abdf169aafcd88d8f11df99404053765" exitCode=0 Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.150763 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" event={"ID":"f37b9312-710d-49b4-8cc7-3956df176627","Type":"ContainerDied","Data":"0017c5743d3acab30b80453ad1028a61abdf169aafcd88d8f11df99404053765"} Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.153954 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1ca8a807-8e20-4d12-8355-09c1883163ca","Type":"ContainerStarted","Data":"1f99fd45eed6bf685ed300e7f393668468b0c7931b21bb607ffea6c3c1cb525b"} Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.153979 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1ca8a807-8e20-4d12-8355-09c1883163ca","Type":"ContainerStarted","Data":"279ea7b10b67eb648191adbb17bb2c82178fd214ff50ae04c0dcea64bcdb5bf9"} Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.154444 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.156895 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" event={"ID":"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d","Type":"ContainerStarted","Data":"45ed56ca91e47846b6a1dd5963efa9805b9c9932973d9b59aafffdb03ca1a45c"} Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.173461 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" podStartSLOduration=6.173439642 podStartE2EDuration="6.173439642s" podCreationTimestamp="2026-02-16 15:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:52.167911127 +0000 UTC m=+1166.352888223" 
watchObservedRunningTime="2026-02-16 15:12:52.173439642 +0000 UTC m=+1166.358416718" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.207559 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" podStartSLOduration=6.207536501 podStartE2EDuration="6.207536501s" podCreationTimestamp="2026-02-16 15:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:52.194130074 +0000 UTC m=+1166.379107150" watchObservedRunningTime="2026-02-16 15:12:52.207536501 +0000 UTC m=+1166.392513577" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.219542 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.295756624 podStartE2EDuration="5.219518168s" podCreationTimestamp="2026-02-16 15:12:47 +0000 UTC" firstStartedPulling="2026-02-16 15:12:49.527280397 +0000 UTC m=+1163.712257473" lastFinishedPulling="2026-02-16 15:12:51.451041941 +0000 UTC m=+1165.636019017" observedRunningTime="2026-02-16 15:12:52.214706793 +0000 UTC m=+1166.399683889" watchObservedRunningTime="2026-02-16 15:12:52.219518168 +0000 UTC m=+1166.404495244" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.717888 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.871246 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f443bcd-c93f-4b89-a048-cc92f28f5854-operator-scripts\") pod \"3f443bcd-c93f-4b89-a048-cc92f28f5854\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.871763 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngs6q\" (UniqueName: \"kubernetes.io/projected/3f443bcd-c93f-4b89-a048-cc92f28f5854-kube-api-access-ngs6q\") pod \"3f443bcd-c93f-4b89-a048-cc92f28f5854\" (UID: \"3f443bcd-c93f-4b89-a048-cc92f28f5854\") " Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.873220 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f443bcd-c93f-4b89-a048-cc92f28f5854-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f443bcd-c93f-4b89-a048-cc92f28f5854" (UID: "3f443bcd-c93f-4b89-a048-cc92f28f5854"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.895455 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f443bcd-c93f-4b89-a048-cc92f28f5854-kube-api-access-ngs6q" (OuterVolumeSpecName: "kube-api-access-ngs6q") pod "3f443bcd-c93f-4b89-a048-cc92f28f5854" (UID: "3f443bcd-c93f-4b89-a048-cc92f28f5854"). InnerVolumeSpecName "kube-api-access-ngs6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.979384 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f443bcd-c93f-4b89-a048-cc92f28f5854-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:52 crc kubenswrapper[4705]: I0216 15:12:52.979429 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngs6q\" (UniqueName: \"kubernetes.io/projected/3f443bcd-c93f-4b89-a048-cc92f28f5854-kube-api-access-ngs6q\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.006349 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.018013 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.033296 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.080569 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjpnj\" (UniqueName: \"kubernetes.io/projected/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-kube-api-access-mjpnj\") pod \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.080618 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8vl6\" (UniqueName: \"kubernetes.io/projected/b2232806-cac7-4787-839b-9bcecac93820-kube-api-access-f8vl6\") pod \"b2232806-cac7-4787-839b-9bcecac93820\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.080664 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-operator-scripts\") pod \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\" (UID: \"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.081353 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" (UID: "cace81ee-1e82-4eb9-b5fa-7837c7dc69bc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.081948 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2232806-cac7-4787-839b-9bcecac93820-operator-scripts\") pod \"b2232806-cac7-4787-839b-9bcecac93820\" (UID: \"b2232806-cac7-4787-839b-9bcecac93820\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.083317 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.083581 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2232806-cac7-4787-839b-9bcecac93820-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2232806-cac7-4787-839b-9bcecac93820" (UID: "b2232806-cac7-4787-839b-9bcecac93820"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.087440 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-kube-api-access-mjpnj" (OuterVolumeSpecName: "kube-api-access-mjpnj") pod "cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" (UID: "cace81ee-1e82-4eb9-b5fa-7837c7dc69bc"). InnerVolumeSpecName "kube-api-access-mjpnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.087482 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2232806-cac7-4787-839b-9bcecac93820-kube-api-access-f8vl6" (OuterVolumeSpecName: "kube-api-access-f8vl6") pod "b2232806-cac7-4787-839b-9bcecac93820" (UID: "b2232806-cac7-4787-839b-9bcecac93820"). 
InnerVolumeSpecName "kube-api-access-f8vl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.169119 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zf4nh" event={"ID":"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca","Type":"ContainerDied","Data":"0cd816be62ee7b758436a143d7764c7aad278e11525f8b11522fd076ebb1aca6"} Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.169179 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cd816be62ee7b758436a143d7764c7aad278e11525f8b11522fd076ebb1aca6" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.169137 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zf4nh" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.172317 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7hxxb" event={"ID":"3f443bcd-c93f-4b89-a048-cc92f28f5854","Type":"ContainerDied","Data":"b883422aff22cd30343e7806da99354205b58e98ffb974a4564c1e46d5973c51"} Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.172380 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b883422aff22cd30343e7806da99354205b58e98ffb974a4564c1e46d5973c51" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.172459 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7hxxb" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.180181 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-340c-account-create-update-htclx" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.180172 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-340c-account-create-update-htclx" event={"ID":"b2232806-cac7-4787-839b-9bcecac93820","Type":"ContainerDied","Data":"3fdde6bf2ee1b1702f08cb70c219c91b36ee883cbd73c8d9f4661db6a85f4944"} Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.180295 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fdde6bf2ee1b1702f08cb70c219c91b36ee883cbd73c8d9f4661db6a85f4944" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.184013 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78e4-account-create-update-475d7" event={"ID":"cace81ee-1e82-4eb9-b5fa-7837c7dc69bc","Type":"ContainerDied","Data":"a59235e29e44d652ce2af2bb1d572a870948acfb1d24657f2e416c6610c19271"} Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.184050 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78e4-account-create-update-475d7" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.184098 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a59235e29e44d652ce2af2bb1d572a870948acfb1d24657f2e416c6610c19271" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.184885 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.186880 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q669n\" (UniqueName: \"kubernetes.io/projected/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-kube-api-access-q669n\") pod \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.186956 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-operator-scripts\") pod \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\" (UID: \"69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.188627 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" (UID: "69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.189043 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2232806-cac7-4787-839b-9bcecac93820-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.189057 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.189067 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjpnj\" (UniqueName: \"kubernetes.io/projected/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc-kube-api-access-mjpnj\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.189078 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8vl6\" (UniqueName: \"kubernetes.io/projected/b2232806-cac7-4787-839b-9bcecac93820-kube-api-access-f8vl6\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.193355 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-kube-api-access-q669n" (OuterVolumeSpecName: "kube-api-access-q669n") pod "69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" (UID: "69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca"). InnerVolumeSpecName "kube-api-access-q669n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.292501 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q669n\" (UniqueName: \"kubernetes.io/projected/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca-kube-api-access-q669n\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.541653 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cjnqj"] Feb 16 15:12:53 crc kubenswrapper[4705]: E0216 15:12:53.542360 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" containerName="mariadb-database-create" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542400 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" containerName="mariadb-database-create" Feb 16 15:12:53 crc kubenswrapper[4705]: E0216 15:12:53.542424 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" containerName="init" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542431 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" containerName="init" Feb 16 15:12:53 crc kubenswrapper[4705]: E0216 15:12:53.542455 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f443bcd-c93f-4b89-a048-cc92f28f5854" containerName="mariadb-database-create" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542462 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f443bcd-c93f-4b89-a048-cc92f28f5854" containerName="mariadb-database-create" Feb 16 15:12:53 crc kubenswrapper[4705]: E0216 15:12:53.542474 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2232806-cac7-4787-839b-9bcecac93820" containerName="mariadb-account-create-update" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542480 4705 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b2232806-cac7-4787-839b-9bcecac93820" containerName="mariadb-account-create-update" Feb 16 15:12:53 crc kubenswrapper[4705]: E0216 15:12:53.542503 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" containerName="mariadb-account-create-update" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542511 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" containerName="mariadb-account-create-update" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542889 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f443bcd-c93f-4b89-a048-cc92f28f5854" containerName="mariadb-database-create" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542914 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" containerName="mariadb-database-create" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542937 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2232806-cac7-4787-839b-9bcecac93820" containerName="mariadb-account-create-update" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542961 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0dcb80-7ee5-47b3-8c58-a07554d1c0ec" containerName="init" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.542974 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" containerName="mariadb-account-create-update" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.544057 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.547999 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.565973 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cjnqj"] Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.679158 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.741270 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg7gf\" (UniqueName: \"kubernetes.io/projected/3edc4e5d-5b55-47b9-8aba-24b10b827f82-kube-api-access-hg7gf\") pod \"root-account-create-update-cjnqj\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.744214 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edc4e5d-5b55-47b9-8aba-24b10b827f82-operator-scripts\") pod \"root-account-create-update-cjnqj\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.781842 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.845985 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftvm5\" (UniqueName: \"kubernetes.io/projected/f37b9312-710d-49b4-8cc7-3956df176627-kube-api-access-ftvm5\") pod \"f37b9312-710d-49b4-8cc7-3956df176627\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.846039 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37b9312-710d-49b4-8cc7-3956df176627-operator-scripts\") pod \"f37b9312-710d-49b4-8cc7-3956df176627\" (UID: \"f37b9312-710d-49b4-8cc7-3956df176627\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.846966 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg7gf\" (UniqueName: \"kubernetes.io/projected/3edc4e5d-5b55-47b9-8aba-24b10b827f82-kube-api-access-hg7gf\") pod \"root-account-create-update-cjnqj\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.847264 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f37b9312-710d-49b4-8cc7-3956df176627-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f37b9312-710d-49b4-8cc7-3956df176627" (UID: "f37b9312-710d-49b4-8cc7-3956df176627"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.848484 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edc4e5d-5b55-47b9-8aba-24b10b827f82-operator-scripts\") pod \"root-account-create-update-cjnqj\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.848636 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37b9312-710d-49b4-8cc7-3956df176627-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.849253 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edc4e5d-5b55-47b9-8aba-24b10b827f82-operator-scripts\") pod \"root-account-create-update-cjnqj\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.852268 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f37b9312-710d-49b4-8cc7-3956df176627-kube-api-access-ftvm5" (OuterVolumeSpecName: "kube-api-access-ftvm5") pod "f37b9312-710d-49b4-8cc7-3956df176627" (UID: "f37b9312-710d-49b4-8cc7-3956df176627"). InnerVolumeSpecName "kube-api-access-ftvm5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.863482 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg7gf\" (UniqueName: \"kubernetes.io/projected/3edc4e5d-5b55-47b9-8aba-24b10b827f82-kube-api-access-hg7gf\") pod \"root-account-create-update-cjnqj\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.872647 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.950415 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq9j7\" (UniqueName: \"kubernetes.io/projected/a486f037-5709-4199-9f76-0cb0c995af25-kube-api-access-sq9j7\") pod \"a486f037-5709-4199-9f76-0cb0c995af25\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.950792 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a486f037-5709-4199-9f76-0cb0c995af25-operator-scripts\") pod \"a486f037-5709-4199-9f76-0cb0c995af25\" (UID: \"a486f037-5709-4199-9f76-0cb0c995af25\") " Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.951520 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftvm5\" (UniqueName: \"kubernetes.io/projected/f37b9312-710d-49b4-8cc7-3956df176627-kube-api-access-ftvm5\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.952079 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a486f037-5709-4199-9f76-0cb0c995af25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a486f037-5709-4199-9f76-0cb0c995af25" (UID: 
"a486f037-5709-4199-9f76-0cb0c995af25"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:53 crc kubenswrapper[4705]: I0216 15:12:53.954429 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a486f037-5709-4199-9f76-0cb0c995af25-kube-api-access-sq9j7" (OuterVolumeSpecName: "kube-api-access-sq9j7") pod "a486f037-5709-4199-9f76-0cb0c995af25" (UID: "a486f037-5709-4199-9f76-0cb0c995af25"). InnerVolumeSpecName "kube-api-access-sq9j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.053751 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a486f037-5709-4199-9f76-0cb0c995af25-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.054119 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq9j7\" (UniqueName: \"kubernetes.io/projected/a486f037-5709-4199-9f76-0cb0c995af25-kube-api-access-sq9j7\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.200618 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.210002 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-n5lkc" event={"ID":"a486f037-5709-4199-9f76-0cb0c995af25","Type":"ContainerDied","Data":"f543cf5460ba46aa6d8f4bdfe041f05f7679a910f9549235f38ee0d748799b96"} Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.210069 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f543cf5460ba46aa6d8f4bdfe041f05f7679a910f9549235f38ee0d748799b96" Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.213663 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.214084 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0063-account-create-update-4tnvs" event={"ID":"f37b9312-710d-49b4-8cc7-3956df176627","Type":"ContainerDied","Data":"688b2130c66b5cedadd83f7eb71a2a00275c8148d969e68e1dce039d0f445cc4"} Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.214160 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="688b2130c66b5cedadd83f7eb71a2a00275c8148d969e68e1dce039d0f445cc4" Feb 16 15:12:54 crc kubenswrapper[4705]: I0216 15:12:54.352296 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cjnqj"] Feb 16 15:12:55 crc kubenswrapper[4705]: I0216 15:12:55.252928 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cjnqj" event={"ID":"3edc4e5d-5b55-47b9-8aba-24b10b827f82","Type":"ContainerDied","Data":"02261dd51fff83f1f769426874aaf3ab8c54221acecfe72a2bd0b7b7e293e788"} Feb 16 15:12:55 crc kubenswrapper[4705]: I0216 15:12:55.252772 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="3edc4e5d-5b55-47b9-8aba-24b10b827f82" containerID="02261dd51fff83f1f769426874aaf3ab8c54221acecfe72a2bd0b7b7e293e788" exitCode=0 Feb 16 15:12:55 crc kubenswrapper[4705]: I0216 15:12:55.253651 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cjnqj" event={"ID":"3edc4e5d-5b55-47b9-8aba-24b10b827f82","Type":"ContainerStarted","Data":"310ac08e44b724036b68cfeafd23a9520a9e42bc7e1946e153df3dba4f2b4130"} Feb 16 15:12:56 crc kubenswrapper[4705]: I0216 15:12:56.589536 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:56 crc kubenswrapper[4705]: I0216 15:12:56.852481 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.073093 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gg5c2"] Feb 16 15:12:57 crc kubenswrapper[4705]: E0216 15:12:57.073823 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37b9312-710d-49b4-8cc7-3956df176627" containerName="mariadb-account-create-update" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.073852 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37b9312-710d-49b4-8cc7-3956df176627" containerName="mariadb-account-create-update" Feb 16 15:12:57 crc kubenswrapper[4705]: E0216 15:12:57.073872 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a486f037-5709-4199-9f76-0cb0c995af25" containerName="mariadb-database-create" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.073882 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a486f037-5709-4199-9f76-0cb0c995af25" containerName="mariadb-database-create" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.074175 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37b9312-710d-49b4-8cc7-3956df176627" 
containerName="mariadb-account-create-update" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.074214 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a486f037-5709-4199-9f76-0cb0c995af25" containerName="mariadb-database-create" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.075317 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gg5c2" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.087790 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gg5c2"] Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.094562 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.162351 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kp5gg"] Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.209223 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a6ad-account-create-update-f24b2"] Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.211329 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a6ad-account-create-update-f24b2" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.213573 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.216877 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a6ad-account-create-update-f24b2"] Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.253435 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmn6v\" (UniqueName: \"kubernetes.io/projected/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-kube-api-access-kmn6v\") pod \"glance-db-create-gg5c2\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") " pod="openstack/glance-db-create-gg5c2" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.253867 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-operator-scripts\") pod \"glance-db-create-gg5c2\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") " pod="openstack/glance-db-create-gg5c2" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.281198 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerName="dnsmasq-dns" containerID="cri-o://45ed56ca91e47846b6a1dd5963efa9805b9c9932973d9b59aafffdb03ca1a45c" gracePeriod=10 Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.356729 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmn6v\" (UniqueName: \"kubernetes.io/projected/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-kube-api-access-kmn6v\") pod \"glance-db-create-gg5c2\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") " pod="openstack/glance-db-create-gg5c2" Feb 16 15:12:57 crc 
kubenswrapper[4705]: I0216 15:12:57.357181 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvpbn\" (UniqueName: \"kubernetes.io/projected/5c5de6a8-c858-4f91-8833-e012562ee1a3-kube-api-access-kvpbn\") pod \"glance-a6ad-account-create-update-f24b2\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " pod="openstack/glance-a6ad-account-create-update-f24b2" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.357233 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5de6a8-c858-4f91-8833-e012562ee1a3-operator-scripts\") pod \"glance-a6ad-account-create-update-f24b2\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " pod="openstack/glance-a6ad-account-create-update-f24b2" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.357438 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-operator-scripts\") pod \"glance-db-create-gg5c2\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") " pod="openstack/glance-db-create-gg5c2" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.358841 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-operator-scripts\") pod \"glance-db-create-gg5c2\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") " pod="openstack/glance-db-create-gg5c2" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.376421 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmn6v\" (UniqueName: \"kubernetes.io/projected/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-kube-api-access-kmn6v\") pod \"glance-db-create-gg5c2\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") " pod="openstack/glance-db-create-gg5c2" Feb 
16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.402575 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gg5c2" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.459531 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvpbn\" (UniqueName: \"kubernetes.io/projected/5c5de6a8-c858-4f91-8833-e012562ee1a3-kube-api-access-kvpbn\") pod \"glance-a6ad-account-create-update-f24b2\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " pod="openstack/glance-a6ad-account-create-update-f24b2" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.459610 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5de6a8-c858-4f91-8833-e012562ee1a3-operator-scripts\") pod \"glance-a6ad-account-create-update-f24b2\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " pod="openstack/glance-a6ad-account-create-update-f24b2" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.460560 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5de6a8-c858-4f91-8833-e012562ee1a3-operator-scripts\") pod \"glance-a6ad-account-create-update-f24b2\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " pod="openstack/glance-a6ad-account-create-update-f24b2" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.478531 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvpbn\" (UniqueName: \"kubernetes.io/projected/5c5de6a8-c858-4f91-8833-e012562ee1a3-kube-api-access-kvpbn\") pod \"glance-a6ad-account-create-update-f24b2\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " pod="openstack/glance-a6ad-account-create-update-f24b2" Feb 16 15:12:57 crc kubenswrapper[4705]: I0216 15:12:57.533676 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a6ad-account-create-update-f24b2" Feb 16 15:12:58 crc kubenswrapper[4705]: I0216 15:12:58.297499 4705 generic.go:334] "Generic (PLEG): container finished" podID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerID="45ed56ca91e47846b6a1dd5963efa9805b9c9932973d9b59aafffdb03ca1a45c" exitCode=0 Feb 16 15:12:58 crc kubenswrapper[4705]: I0216 15:12:58.297570 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" event={"ID":"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d","Type":"ContainerDied","Data":"45ed56ca91e47846b6a1dd5963efa9805b9c9932973d9b59aafffdb03ca1a45c"} Feb 16 15:12:58 crc kubenswrapper[4705]: I0216 15:12:58.877524 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-zg96k"] Feb 16 15:12:58 crc kubenswrapper[4705]: I0216 15:12:58.883516 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:58 crc kubenswrapper[4705]: I0216 15:12:58.906961 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zg96k"] Feb 16 15:12:58 crc kubenswrapper[4705]: I0216 15:12:58.973119 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.001904 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.001983 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-dns-svc\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.002025 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-config\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.002060 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqtrj\" (UniqueName: \"kubernetes.io/projected/af8b1ad4-1803-403b-bc68-8c6ccb877b11-kube-api-access-kqtrj\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.002127 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.104878 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqtrj\" (UniqueName: \"kubernetes.io/projected/af8b1ad4-1803-403b-bc68-8c6ccb877b11-kube-api-access-kqtrj\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.105355 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.105732 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.105930 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-dns-svc\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.106076 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-config\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.106310 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.106952 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-config\") pod 
\"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.110886 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.113391 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-dns-svc\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.134721 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqtrj\" (UniqueName: \"kubernetes.io/projected/af8b1ad4-1803-403b-bc68-8c6ccb877b11-kube-api-access-kqtrj\") pod \"dnsmasq-dns-698758b865-zg96k\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") " pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.226237 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.237136 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"] Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.239072 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.265076 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"] Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.323197 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a2df1c-b87d-4765-b900-e6b165802be2-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-2xsdv\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.324454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhxfl\" (UniqueName: \"kubernetes.io/projected/45a2df1c-b87d-4765-b900-e6b165802be2-kube-api-access-bhxfl\") pod \"mysqld-exporter-openstack-cell1-db-create-2xsdv\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.427426 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a2df1c-b87d-4765-b900-e6b165802be2-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-2xsdv\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.427642 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhxfl\" (UniqueName: \"kubernetes.io/projected/45a2df1c-b87d-4765-b900-e6b165802be2-kube-api-access-bhxfl\") pod \"mysqld-exporter-openstack-cell1-db-create-2xsdv\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " 
pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.428846 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a2df1c-b87d-4765-b900-e6b165802be2-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-2xsdv\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.460505 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-baa1-account-create-update-4xrwg"] Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.461687 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhxfl\" (UniqueName: \"kubernetes.io/projected/45a2df1c-b87d-4765-b900-e6b165802be2-kube-api-access-bhxfl\") pod \"mysqld-exporter-openstack-cell1-db-create-2xsdv\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.463865 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.466358 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.498827 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-baa1-account-create-update-4xrwg"] Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.564584 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.636804 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mv6j\" (UniqueName: \"kubernetes.io/projected/3c074c5c-fae9-49f3-8139-adb92b649951-kube-api-access-7mv6j\") pod \"mysqld-exporter-baa1-account-create-update-4xrwg\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.637022 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c074c5c-fae9-49f3-8139-adb92b649951-operator-scripts\") pod \"mysqld-exporter-baa1-account-create-update-4xrwg\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.647946 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cjnqj" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.738336 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edc4e5d-5b55-47b9-8aba-24b10b827f82-operator-scripts\") pod \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.738980 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg7gf\" (UniqueName: \"kubernetes.io/projected/3edc4e5d-5b55-47b9-8aba-24b10b827f82-kube-api-access-hg7gf\") pod \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\" (UID: \"3edc4e5d-5b55-47b9-8aba-24b10b827f82\") " Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.741464 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mv6j\" (UniqueName: \"kubernetes.io/projected/3c074c5c-fae9-49f3-8139-adb92b649951-kube-api-access-7mv6j\") pod \"mysqld-exporter-baa1-account-create-update-4xrwg\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.742086 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c074c5c-fae9-49f3-8139-adb92b649951-operator-scripts\") pod \"mysqld-exporter-baa1-account-create-update-4xrwg\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.742697 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3edc4e5d-5b55-47b9-8aba-24b10b827f82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3edc4e5d-5b55-47b9-8aba-24b10b827f82" (UID: 
"3edc4e5d-5b55-47b9-8aba-24b10b827f82"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.743430 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c074c5c-fae9-49f3-8139-adb92b649951-operator-scripts\") pod \"mysqld-exporter-baa1-account-create-update-4xrwg\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.762951 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3edc4e5d-5b55-47b9-8aba-24b10b827f82-kube-api-access-hg7gf" (OuterVolumeSpecName: "kube-api-access-hg7gf") pod "3edc4e5d-5b55-47b9-8aba-24b10b827f82" (UID: "3edc4e5d-5b55-47b9-8aba-24b10b827f82"). InnerVolumeSpecName "kube-api-access-hg7gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.772474 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mv6j\" (UniqueName: \"kubernetes.io/projected/3c074c5c-fae9-49f3-8139-adb92b649951-kube-api-access-7mv6j\") pod \"mysqld-exporter-baa1-account-create-update-4xrwg\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.846026 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edc4e5d-5b55-47b9-8aba-24b10b827f82-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.846062 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg7gf\" (UniqueName: \"kubernetes.io/projected/3edc4e5d-5b55-47b9-8aba-24b10b827f82-kube-api-access-hg7gf\") on node \"crc\" 
DevicePath \"\"" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.905528 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" Feb 16 15:12:59 crc kubenswrapper[4705]: I0216 15:12:59.966016 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.017802 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.018872 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3edc4e5d-5b55-47b9-8aba-24b10b827f82" containerName="mariadb-account-create-update" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.018939 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3edc4e5d-5b55-47b9-8aba-24b10b827f82" containerName="mariadb-account-create-update" Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.019008 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerName="init" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.019057 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerName="init" Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.019127 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerName="dnsmasq-dns" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.019180 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerName="dnsmasq-dns" Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.019456 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3edc4e5d-5b55-47b9-8aba-24b10b827f82" containerName="mariadb-account-create-update" Feb 16 15:13:00 crc 
kubenswrapper[4705]: I0216 15:13:00.019538 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" containerName="dnsmasq-dns"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.029853 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.040896 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.040996 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-gs8lf"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.041217 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.044948 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.049160 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-config\") pod \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") "
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.049217 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-ovsdbserver-nb\") pod \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") "
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.049472 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npq96\" (UniqueName: \"kubernetes.io/projected/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-kube-api-access-npq96\") pod \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") "
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.049505 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-dns-svc\") pod \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\" (UID: \"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d\") "
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.068588 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.071223 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-kube-api-access-npq96" (OuterVolumeSpecName: "kube-api-access-npq96") pod "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" (UID: "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d"). InnerVolumeSpecName "kube-api-access-npq96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.116438 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-config" (OuterVolumeSpecName: "config") pod "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" (UID: "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.153915 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" (UID: "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.154815 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.154888 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a1c8c609-3b8c-48d1-9731-56451bf10919-cache\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155776 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c8c609-3b8c-48d1-9731-56451bf10919-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155810 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a1c8c609-3b8c-48d1-9731-56451bf10919-lock\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155831 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnvlc\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-kube-api-access-wnvlc\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155881 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155942 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npq96\" (UniqueName: \"kubernetes.io/projected/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-kube-api-access-npq96\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155956 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.155966 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.164961 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" (UID: "78a194a4-2cf0-46c3-b57c-4c4919e6ea1d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.203288 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"]
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258302 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258433 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258469 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a1c8c609-3b8c-48d1-9731-56451bf10919-cache\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258629 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c8c609-3b8c-48d1-9731-56451bf10919-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258672 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a1c8c609-3b8c-48d1-9731-56451bf10919-lock\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258691 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnvlc\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-kube-api-access-wnvlc\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.258745 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.258781 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.258786 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.258856 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift podName:a1c8c609-3b8c-48d1-9731-56451bf10919 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:00.758833328 +0000 UTC m=+1174.943810394 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift") pod "swift-storage-0" (UID: "a1c8c609-3b8c-48d1-9731-56451bf10919") : configmap "swift-ring-files" not found
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.259128 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a1c8c609-3b8c-48d1-9731-56451bf10919-cache\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.259656 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a1c8c609-3b8c-48d1-9731-56451bf10919-lock\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.265677 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.265710 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f656772c32ef3299954509100c551f8dec1696aec746556cecee02eefe5d595/globalmount\"" pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.269706 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c8c609-3b8c-48d1-9731-56451bf10919-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.276898 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnvlc\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-kube-api-access-wnvlc\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.298283 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-01edcd6f-0b70-44b4-9688-2cf0dc9c96f0\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.357472 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cjnqj" event={"ID":"3edc4e5d-5b55-47b9-8aba-24b10b827f82","Type":"ContainerDied","Data":"310ac08e44b724036b68cfeafd23a9520a9e42bc7e1946e153df3dba4f2b4130"}
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.357518 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cjnqj"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.357527 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310ac08e44b724036b68cfeafd23a9520a9e42bc7e1946e153df3dba4f2b4130"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.367356 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerStarted","Data":"a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8"}
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.368966 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg" event={"ID":"78a194a4-2cf0-46c3-b57c-4c4919e6ea1d","Type":"ContainerDied","Data":"d300d6e6a8e721e23b118ec6cd1d7277765e081fcd0cf727ad7a0cfd4099f2fa"}
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.368999 4705 scope.go:117] "RemoveContainer" containerID="45ed56ca91e47846b6a1dd5963efa9805b9c9932973d9b59aafffdb03ca1a45c"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.369127 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kp5gg"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.377714 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" event={"ID":"45a2df1c-b87d-4765-b900-e6b165802be2","Type":"ContainerStarted","Data":"5e7336f58c339522bc73a0fe5659f35b381591982a9cc86de5f68644ba55b5d8"}
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.410040 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gg5c2"]
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.421120 4705 scope.go:117] "RemoveContainer" containerID="01f75b10ed3403636c6ff4d8d3dc13406165f688cf513365a0ee3449c67e9dd6"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.433644 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zg96k"]
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.433683 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kp5gg"]
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.440236 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kp5gg"]
Feb 16 15:13:00 crc kubenswrapper[4705]: W0216 15:13:00.550279 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf8b1ad4_1803_403b_bc68_8c6ccb877b11.slice/crio-d96f20877dfa6c0327b80e566c47754bd3fe080f30a415bbffd8ba72ac738b94 WatchSource:0}: Error finding container d96f20877dfa6c0327b80e566c47754bd3fe080f30a415bbffd8ba72ac738b94: Status 404 returned error can't find the container with id d96f20877dfa6c0327b80e566c47754bd3fe080f30a415bbffd8ba72ac738b94
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.655725 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a6ad-account-create-update-f24b2"]
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.685194 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bkfjd"]
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.701008 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.701763 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bkfjd"]
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.707190 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.707248 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.707193 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787188 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787270 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-swiftconf\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787339 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-dispersionconf\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787361 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-ring-data-devices\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787439 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-etc-swift\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787462 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwncx\" (UniqueName: \"kubernetes.io/projected/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-kube-api-access-cwncx\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787491 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-combined-ca-bundle\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.787522 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-scripts\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.787755 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.787785 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 15:13:00 crc kubenswrapper[4705]: E0216 15:13:00.787835 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift podName:a1c8c609-3b8c-48d1-9731-56451bf10919 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:01.787818528 +0000 UTC m=+1175.972795604 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift") pod "swift-storage-0" (UID: "a1c8c609-3b8c-48d1-9731-56451bf10919") : configmap "swift-ring-files" not found
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.878770 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-baa1-account-create-update-4xrwg"]
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889332 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-combined-ca-bundle\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889406 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-scripts\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889534 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-swiftconf\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889603 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-dispersionconf\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889626 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-ring-data-devices\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889691 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-etc-swift\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.889719 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwncx\" (UniqueName: \"kubernetes.io/projected/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-kube-api-access-cwncx\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.891125 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-etc-swift\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.891754 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-scripts\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.895891 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-dispersionconf\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.896727 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-swiftconf\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.897569 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-combined-ca-bundle\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.901234 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-ring-data-devices\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:00 crc kubenswrapper[4705]: I0216 15:13:00.911406 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwncx\" (UniqueName: \"kubernetes.io/projected/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-kube-api-access-cwncx\") pod \"swift-ring-rebalance-bkfjd\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.040556 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bkfjd"
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.390124 4705 generic.go:334] "Generic (PLEG): container finished" podID="45a2df1c-b87d-4765-b900-e6b165802be2" containerID="e22a4e97a46141c555ff698e641012530b3f1b9226d8679c4a611d3291ce6a4f" exitCode=0
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.390177 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" event={"ID":"45a2df1c-b87d-4765-b900-e6b165802be2","Type":"ContainerDied","Data":"e22a4e97a46141c555ff698e641012530b3f1b9226d8679c4a611d3291ce6a4f"}
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.394424 4705 generic.go:334] "Generic (PLEG): container finished" podID="19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" containerID="65b95c950083c9aeb3e3619fc2bb885d98f3037af8bdbac9d4afb42843773d92" exitCode=0
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.394474 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gg5c2" event={"ID":"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7","Type":"ContainerDied","Data":"65b95c950083c9aeb3e3619fc2bb885d98f3037af8bdbac9d4afb42843773d92"}
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.394491 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gg5c2" event={"ID":"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7","Type":"ContainerStarted","Data":"5c05ddc50d2e35e4ecf7a88f36416b9a90fec34ec02a9d4b84ccb8c7c76e6af6"}
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.401534 4705 generic.go:334] "Generic (PLEG): container finished" podID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerID="707d5db016ee71c7be05915614101d9c579374a5ac210067cf65362c8d2b2120" exitCode=0
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.401877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zg96k" event={"ID":"af8b1ad4-1803-403b-bc68-8c6ccb877b11","Type":"ContainerDied","Data":"707d5db016ee71c7be05915614101d9c579374a5ac210067cf65362c8d2b2120"}
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.401911 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zg96k" event={"ID":"af8b1ad4-1803-403b-bc68-8c6ccb877b11","Type":"ContainerStarted","Data":"d96f20877dfa6c0327b80e566c47754bd3fe080f30a415bbffd8ba72ac738b94"}
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.413443 4705 generic.go:334] "Generic (PLEG): container finished" podID="5c5de6a8-c858-4f91-8833-e012562ee1a3" containerID="2f79d797c3129ced8ee4fbe01de9894c6da786bc25e0e54f5445a9d4c4891698" exitCode=0
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.413586 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6ad-account-create-update-f24b2" event={"ID":"5c5de6a8-c858-4f91-8833-e012562ee1a3","Type":"ContainerDied","Data":"2f79d797c3129ced8ee4fbe01de9894c6da786bc25e0e54f5445a9d4c4891698"}
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.413638 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6ad-account-create-update-f24b2" event={"ID":"5c5de6a8-c858-4f91-8833-e012562ee1a3","Type":"ContainerStarted","Data":"6a130d99140b466fd7bcb2b0621ecdca894d7e137114770d3dee911480e86be0"}
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.422841 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" event={"ID":"3c074c5c-fae9-49f3-8139-adb92b649951","Type":"ContainerStarted","Data":"55a8a589929400f0bdc43a4b2e65afccb3545d7c47842f8b1d91a93888750508"}
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.422915 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" event={"ID":"3c074c5c-fae9-49f3-8139-adb92b649951","Type":"ContainerStarted","Data":"37130c25ccf30f81bfb898209b94760a4b3f1cb5bbdc1b8815367c32a46d2055"}
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.506142 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" podStartSLOduration=2.506114974 podStartE2EDuration="2.506114974s" podCreationTimestamp="2026-02-16 15:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:01.47504142 +0000 UTC m=+1175.660018496" watchObservedRunningTime="2026-02-16 15:13:01.506114974 +0000 UTC m=+1175.691092050"
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.573075 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bkfjd"]
Feb 16 15:13:01 crc kubenswrapper[4705]: I0216 15:13:01.815472 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:01 crc kubenswrapper[4705]: E0216 15:13:01.816274 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 15:13:01 crc kubenswrapper[4705]: E0216 15:13:01.816665 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 15:13:01 crc kubenswrapper[4705]: E0216 15:13:01.816745 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift podName:a1c8c609-3b8c-48d1-9731-56451bf10919 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:03.816726791 +0000 UTC m=+1178.001703857 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift") pod "swift-storage-0" (UID: "a1c8c609-3b8c-48d1-9731-56451bf10919") : configmap "swift-ring-files" not found
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.439903 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78a194a4-2cf0-46c3-b57c-4c4919e6ea1d" path="/var/lib/kubelet/pods/78a194a4-2cf0-46c3-b57c-4c4919e6ea1d/volumes"
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.441205 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.441253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zg96k" event={"ID":"af8b1ad4-1803-403b-bc68-8c6ccb877b11","Type":"ContainerStarted","Data":"f92aa91a0bd4d4840962889a87f1afcde2ceebd9899012f0e33163043e3a2987"}
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.441282 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bkfjd" event={"ID":"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d","Type":"ContainerStarted","Data":"dfe132517673f75467ba9259f9327d854f3707668943a8696e2a0f96d6cf192b"}
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.442331 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c074c5c-fae9-49f3-8139-adb92b649951" containerID="55a8a589929400f0bdc43a4b2e65afccb3545d7c47842f8b1d91a93888750508" exitCode=0
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.442556 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" event={"ID":"3c074c5c-fae9-49f3-8139-adb92b649951","Type":"ContainerDied","Data":"55a8a589929400f0bdc43a4b2e65afccb3545d7c47842f8b1d91a93888750508"}
Feb 16 15:13:02 crc kubenswrapper[4705]: I0216 15:13:02.465485 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-zg96k" podStartSLOduration=4.46546243 podStartE2EDuration="4.46546243s" podCreationTimestamp="2026-02-16 15:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:02.461128228 +0000 UTC m=+1176.646105334" watchObservedRunningTime="2026-02-16 15:13:02.46546243 +0000 UTC m=+1176.650439506"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.325623 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gg5c2"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.379747 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-operator-scripts\") pod \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") "
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.379902 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmn6v\" (UniqueName: \"kubernetes.io/projected/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-kube-api-access-kmn6v\") pod \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\" (UID: \"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7\") "
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.382625 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" (UID: "19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.383974 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.390614 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-kube-api-access-kmn6v" (OuterVolumeSpecName: "kube-api-access-kmn6v") pod "19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" (UID: "19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7"). InnerVolumeSpecName "kube-api-access-kmn6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.454513 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.454834 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gg5c2" event={"ID":"19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7","Type":"ContainerDied","Data":"5c05ddc50d2e35e4ecf7a88f36416b9a90fec34ec02a9d4b84ccb8c7c76e6af6"}
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.454880 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c05ddc50d2e35e4ecf7a88f36416b9a90fec34ec02a9d4b84ccb8c7c76e6af6"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.454853 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gg5c2"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.456986 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6ad-account-create-update-f24b2" event={"ID":"5c5de6a8-c858-4f91-8833-e012562ee1a3","Type":"ContainerDied","Data":"6a130d99140b466fd7bcb2b0621ecdca894d7e137114770d3dee911480e86be0"}
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.457026 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a130d99140b466fd7bcb2b0621ecdca894d7e137114770d3dee911480e86be0"
Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.458536 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a6ad-account-create-update-f24b2" Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.459264 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" event={"ID":"45a2df1c-b87d-4765-b900-e6b165802be2","Type":"ContainerDied","Data":"5e7336f58c339522bc73a0fe5659f35b381591982a9cc86de5f68644ba55b5d8"} Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.459333 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e7336f58c339522bc73a0fe5659f35b381591982a9cc86de5f68644ba55b5d8" Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.459431 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv" Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.487892 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5de6a8-c858-4f91-8833-e012562ee1a3-operator-scripts\") pod \"5c5de6a8-c858-4f91-8833-e012562ee1a3\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.488111 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvpbn\" (UniqueName: \"kubernetes.io/projected/5c5de6a8-c858-4f91-8833-e012562ee1a3-kube-api-access-kvpbn\") pod \"5c5de6a8-c858-4f91-8833-e012562ee1a3\" (UID: \"5c5de6a8-c858-4f91-8833-e012562ee1a3\") " Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.488205 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhxfl\" (UniqueName: \"kubernetes.io/projected/45a2df1c-b87d-4765-b900-e6b165802be2-kube-api-access-bhxfl\") pod \"45a2df1c-b87d-4765-b900-e6b165802be2\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.488384 
4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a2df1c-b87d-4765-b900-e6b165802be2-operator-scripts\") pod \"45a2df1c-b87d-4765-b900-e6b165802be2\" (UID: \"45a2df1c-b87d-4765-b900-e6b165802be2\") " Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.489842 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5de6a8-c858-4f91-8833-e012562ee1a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c5de6a8-c858-4f91-8833-e012562ee1a3" (UID: "5c5de6a8-c858-4f91-8833-e012562ee1a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.492426 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmn6v\" (UniqueName: \"kubernetes.io/projected/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7-kube-api-access-kmn6v\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.497214 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45a2df1c-b87d-4765-b900-e6b165802be2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45a2df1c-b87d-4765-b900-e6b165802be2" (UID: "45a2df1c-b87d-4765-b900-e6b165802be2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.502934 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5de6a8-c858-4f91-8833-e012562ee1a3-kube-api-access-kvpbn" (OuterVolumeSpecName: "kube-api-access-kvpbn") pod "5c5de6a8-c858-4f91-8833-e012562ee1a3" (UID: "5c5de6a8-c858-4f91-8833-e012562ee1a3"). InnerVolumeSpecName "kube-api-access-kvpbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.510593 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a2df1c-b87d-4765-b900-e6b165802be2-kube-api-access-bhxfl" (OuterVolumeSpecName: "kube-api-access-bhxfl") pod "45a2df1c-b87d-4765-b900-e6b165802be2" (UID: "45a2df1c-b87d-4765-b900-e6b165802be2"). InnerVolumeSpecName "kube-api-access-bhxfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.595708 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5de6a8-c858-4f91-8833-e012562ee1a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.595759 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvpbn\" (UniqueName: \"kubernetes.io/projected/5c5de6a8-c858-4f91-8833-e012562ee1a3-kube-api-access-kvpbn\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.595775 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhxfl\" (UniqueName: \"kubernetes.io/projected/45a2df1c-b87d-4765-b900-e6b165802be2-kube-api-access-bhxfl\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.595789 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a2df1c-b87d-4765-b900-e6b165802be2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:03 crc kubenswrapper[4705]: I0216 15:13:03.904627 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:03 crc 
kubenswrapper[4705]: E0216 15:13:03.904905 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:13:03 crc kubenswrapper[4705]: E0216 15:13:03.905420 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:13:03 crc kubenswrapper[4705]: E0216 15:13:03.905488 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift podName:a1c8c609-3b8c-48d1-9731-56451bf10919 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:07.905468526 +0000 UTC m=+1182.090445602 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift") pod "swift-storage-0" (UID: "a1c8c609-3b8c-48d1-9731-56451bf10919") : configmap "swift-ring-files" not found Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.474580 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerStarted","Data":"a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e"} Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.482644 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a6ad-account-create-update-f24b2" Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.482794 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" event={"ID":"3c074c5c-fae9-49f3-8139-adb92b649951","Type":"ContainerDied","Data":"37130c25ccf30f81bfb898209b94760a4b3f1cb5bbdc1b8815367c32a46d2055"} Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.482860 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37130c25ccf30f81bfb898209b94760a4b3f1cb5bbdc1b8815367c32a46d2055" Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.490950 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.521954 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c074c5c-fae9-49f3-8139-adb92b649951-operator-scripts\") pod \"3c074c5c-fae9-49f3-8139-adb92b649951\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.522205 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mv6j\" (UniqueName: \"kubernetes.io/projected/3c074c5c-fae9-49f3-8139-adb92b649951-kube-api-access-7mv6j\") pod \"3c074c5c-fae9-49f3-8139-adb92b649951\" (UID: \"3c074c5c-fae9-49f3-8139-adb92b649951\") " Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.523168 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c074c5c-fae9-49f3-8139-adb92b649951-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c074c5c-fae9-49f3-8139-adb92b649951" (UID: "3c074c5c-fae9-49f3-8139-adb92b649951"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.523800 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c074c5c-fae9-49f3-8139-adb92b649951-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.544066 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c074c5c-fae9-49f3-8139-adb92b649951-kube-api-access-7mv6j" (OuterVolumeSpecName: "kube-api-access-7mv6j") pod "3c074c5c-fae9-49f3-8139-adb92b649951" (UID: "3c074c5c-fae9-49f3-8139-adb92b649951"). InnerVolumeSpecName "kube-api-access-7mv6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:04 crc kubenswrapper[4705]: I0216 15:13:04.628346 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mv6j\" (UniqueName: \"kubernetes.io/projected/3c074c5c-fae9-49f3-8139-adb92b649951-kube-api-access-7mv6j\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:05 crc kubenswrapper[4705]: I0216 15:13:05.100330 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cjnqj"] Feb 16 15:13:05 crc kubenswrapper[4705]: I0216 15:13:05.113282 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cjnqj"] Feb 16 15:13:05 crc kubenswrapper[4705]: I0216 15:13:05.493167 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-baa1-account-create-update-4xrwg" Feb 16 15:13:06 crc kubenswrapper[4705]: I0216 15:13:06.443432 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3edc4e5d-5b55-47b9-8aba-24b10b827f82" path="/var/lib/kubelet/pods/3edc4e5d-5b55-47b9-8aba-24b10b827f82/volumes" Feb 16 15:13:07 crc kubenswrapper[4705]: I0216 15:13:07.162671 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5cb874789d-44cjq" podUID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" containerName="console" containerID="cri-o://b9665d2970a8c4f5fa92be6c299171cf94ba823f0cf4cc2d207db22022558095" gracePeriod=15 Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.358434 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-2kkpm"] Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.399125 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" containerName="mariadb-database-create" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.399197 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" containerName="mariadb-database-create" Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.399249 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a2df1c-b87d-4765-b900-e6b165802be2" containerName="mariadb-database-create" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.399261 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a2df1c-b87d-4765-b900-e6b165802be2" containerName="mariadb-database-create" Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.399274 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c074c5c-fae9-49f3-8139-adb92b649951" containerName="mariadb-account-create-update" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.399287 4705 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="3c074c5c-fae9-49f3-8139-adb92b649951" containerName="mariadb-account-create-update" Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.399315 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5de6a8-c858-4f91-8833-e012562ee1a3" containerName="mariadb-account-create-update" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.399326 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5de6a8-c858-4f91-8833-e012562ee1a3" containerName="mariadb-account-create-update" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.400653 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a2df1c-b87d-4765-b900-e6b165802be2" containerName="mariadb-database-create" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.400712 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5de6a8-c858-4f91-8833-e012562ee1a3" containerName="mariadb-account-create-update" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.400771 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c074c5c-fae9-49f3-8139-adb92b649951" containerName="mariadb-account-create-update" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.400799 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" containerName="mariadb-database-create" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.402047 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2kkpm"] Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.402237 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.412105 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.414843 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hkp6m" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.523703 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-combined-ca-bundle\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.523744 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-db-sync-config-data\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.523792 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tchhs\" (UniqueName: \"kubernetes.io/projected/1eba064a-3f7c-4395-beca-1b77b85e1a29-kube-api-access-tchhs\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.523934 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-config-data\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 
crc kubenswrapper[4705]: I0216 15:13:07.528663 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5cb874789d-44cjq_5ab25c9f-91f2-46f2-8abf-5004d8c114ad/console/0.log" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.528716 4705 generic.go:334] "Generic (PLEG): container finished" podID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" containerID="b9665d2970a8c4f5fa92be6c299171cf94ba823f0cf4cc2d207db22022558095" exitCode=2 Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.528961 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cb874789d-44cjq" event={"ID":"5ab25c9f-91f2-46f2-8abf-5004d8c114ad","Type":"ContainerDied","Data":"b9665d2970a8c4f5fa92be6c299171cf94ba823f0cf4cc2d207db22022558095"} Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.626323 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tchhs\" (UniqueName: \"kubernetes.io/projected/1eba064a-3f7c-4395-beca-1b77b85e1a29-kube-api-access-tchhs\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.626592 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-config-data\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.626741 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-combined-ca-bundle\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.626765 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-db-sync-config-data\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.641158 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-combined-ca-bundle\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.641804 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-config-data\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.641976 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-db-sync-config-data\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.656005 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tchhs\" (UniqueName: \"kubernetes.io/projected/1eba064a-3f7c-4395-beca-1b77b85e1a29-kube-api-access-tchhs\") pod \"glance-db-sync-2kkpm\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") " pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.772084 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2kkpm" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.802693 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:07.935066 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.935400 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.935432 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:13:08 crc kubenswrapper[4705]: E0216 15:13:07.935502 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift podName:a1c8c609-3b8c-48d1-9731-56451bf10919 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:15.935478147 +0000 UTC m=+1190.120455223 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift") pod "swift-storage-0" (UID: "a1c8c609-3b8c-48d1-9731-56451bf10919") : configmap "swift-ring-files" not found Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:08.539389 4705 generic.go:334] "Generic (PLEG): container finished" podID="139788ad-b160-4139-a6af-094e33c581e5" containerID="c45bc0861e5e942a3fddb03b7864490ab4f0322209d56a4aa3501d6face13652" exitCode=0 Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:08.539919 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"139788ad-b160-4139-a6af-094e33c581e5","Type":"ContainerDied","Data":"c45bc0861e5e942a3fddb03b7864490ab4f0322209d56a4aa3501d6face13652"} Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:08.873910 4705 patch_prober.go:28] interesting pod/console-5cb874789d-44cjq container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/health\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body= Feb 16 15:13:08 crc kubenswrapper[4705]: I0216 15:13:08.874051 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5cb874789d-44cjq" podUID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" containerName="console" probeResult="failure" output="Get \"https://10.217.0.87:8443/health\": dial tcp 10.217.0.87:8443: connect: connection refused" Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.228600 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-zg96k" Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.312843 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7rdzt"] Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.313168 4705 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerName="dnsmasq-dns" containerID="cri-o://fbd2f10536c7c8de9fd23012a23722dfc54f26482b28650f111c8e0634add3bd" gracePeriod=10 Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.561924 4705 generic.go:334] "Generic (PLEG): container finished" podID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerID="fbd2f10536c7c8de9fd23012a23722dfc54f26482b28650f111c8e0634add3bd" exitCode=0 Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.561989 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" event={"ID":"cf9cafcc-24ed-4b80-9483-33f60d92f00f","Type":"ContainerDied","Data":"fbd2f10536c7c8de9fd23012a23722dfc54f26482b28650f111c8e0634add3bd"} Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.866024 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.867954 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.872238 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.879203 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.998536 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0" Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.998958 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxz7l\" (UniqueName: \"kubernetes.io/projected/683ef288-8b6e-4612-be52-d1654bd75098-kube-api-access-bxz7l\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0" Feb 16 15:13:09 crc kubenswrapper[4705]: I0216 15:13:09.999088 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-config-data\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.031099 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.106344 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-config\") pod \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.107069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-sb\") pod \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.107109 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-dns-svc\") pod \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.107143 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-nb\") pod \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.107270 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grvmd\" (UniqueName: \"kubernetes.io/projected/cf9cafcc-24ed-4b80-9483-33f60d92f00f-kube-api-access-grvmd\") pod \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\" (UID: \"cf9cafcc-24ed-4b80-9483-33f60d92f00f\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.107764 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxz7l\" 
(UniqueName: \"kubernetes.io/projected/683ef288-8b6e-4612-be52-d1654bd75098-kube-api-access-bxz7l\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.107909 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-config-data\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.108469 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.153729 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf9cafcc-24ed-4b80-9483-33f60d92f00f-kube-api-access-grvmd" (OuterVolumeSpecName: "kube-api-access-grvmd") pod "cf9cafcc-24ed-4b80-9483-33f60d92f00f" (UID: "cf9cafcc-24ed-4b80-9483-33f60d92f00f"). InnerVolumeSpecName "kube-api-access-grvmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.158876 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lz7zl"] Feb 16 15:13:10 crc kubenswrapper[4705]: E0216 15:13:10.159427 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerName="init" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.159447 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerName="init" Feb 16 15:13:10 crc kubenswrapper[4705]: E0216 15:13:10.159467 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerName="dnsmasq-dns" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.159473 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerName="dnsmasq-dns" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.159653 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" containerName="dnsmasq-dns" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.160440 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.168345 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.168871 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-config-data\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.174062 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.176648 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxz7l\" (UniqueName: \"kubernetes.io/projected/683ef288-8b6e-4612-be52-d1654bd75098-kube-api-access-bxz7l\") pod \"mysqld-exporter-0\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " pod="openstack/mysqld-exporter-0" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.203218 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lz7zl"] Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.236057 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5cb874789d-44cjq_5ab25c9f-91f2-46f2-8abf-5004d8c114ad/console/0.log" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.236172 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.242445 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzzvh\" (UniqueName: \"kubernetes.io/projected/b68c2080-dd84-406b-ba19-b4cdd136c90e-kube-api-access-fzzvh\") pod \"root-account-create-update-lz7zl\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.242802 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b68c2080-dd84-406b-ba19-b4cdd136c90e-operator-scripts\") pod \"root-account-create-update-lz7zl\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.242949 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grvmd\" (UniqueName: \"kubernetes.io/projected/cf9cafcc-24ed-4b80-9483-33f60d92f00f-kube-api-access-grvmd\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.311983 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.354951 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlmrw\" (UniqueName: \"kubernetes.io/projected/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-kube-api-access-zlmrw\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.356107 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-serving-cert\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.356243 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-service-ca\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.357664 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-oauth-serving-cert\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.358339 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.358348 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-service-ca" (OuterVolumeSpecName: "service-ca") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.358439 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-config\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.358761 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-config" (OuterVolumeSpecName: "console-config") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.358792 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-oauth-config\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.358943 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-trusted-ca-bundle\") pod \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\" (UID: \"5ab25c9f-91f2-46f2-8abf-5004d8c114ad\") " Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.359988 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.361750 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.361845 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf9cafcc-24ed-4b80-9483-33f60d92f00f" (UID: "cf9cafcc-24ed-4b80-9483-33f60d92f00f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.362947 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b68c2080-dd84-406b-ba19-b4cdd136c90e-operator-scripts\") pod \"root-account-create-update-lz7zl\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.363285 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.363806 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b68c2080-dd84-406b-ba19-b4cdd136c90e-operator-scripts\") pod \"root-account-create-update-lz7zl\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.379758 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-kube-api-access-zlmrw" (OuterVolumeSpecName: "kube-api-access-zlmrw") pod "5ab25c9f-91f2-46f2-8abf-5004d8c114ad" (UID: "5ab25c9f-91f2-46f2-8abf-5004d8c114ad"). InnerVolumeSpecName "kube-api-access-zlmrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396145 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzzvh\" (UniqueName: \"kubernetes.io/projected/b68c2080-dd84-406b-ba19-b4cdd136c90e-kube-api-access-fzzvh\") pod \"root-account-create-update-lz7zl\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396413 4705 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396426 4705 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396437 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396447 4705 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396458 4705 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396466 4705 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.396476 4705 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.398340 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf9cafcc-24ed-4b80-9483-33f60d92f00f" (UID: "cf9cafcc-24ed-4b80-9483-33f60d92f00f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.405454 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf9cafcc-24ed-4b80-9483-33f60d92f00f" (UID: "cf9cafcc-24ed-4b80-9483-33f60d92f00f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.454925 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-config" (OuterVolumeSpecName: "config") pod "cf9cafcc-24ed-4b80-9483-33f60d92f00f" (UID: "cf9cafcc-24ed-4b80-9483-33f60d92f00f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.456051 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzzvh\" (UniqueName: \"kubernetes.io/projected/b68c2080-dd84-406b-ba19-b4cdd136c90e-kube-api-access-fzzvh\") pod \"root-account-create-update-lz7zl\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.504161 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.508262 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.512462 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.512477 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlmrw\" (UniqueName: \"kubernetes.io/projected/5ab25c9f-91f2-46f2-8abf-5004d8c114ad-kube-api-access-zlmrw\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.512495 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9cafcc-24ed-4b80-9483-33f60d92f00f-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.521601 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2kkpm"] Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.603881 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"139788ad-b160-4139-a6af-094e33c581e5","Type":"ContainerStarted","Data":"eebb0ead065499915d7a7044c050bea4c8e0517ce9b75b4f679fb68063b8e5ce"} Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.604358 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.614114 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5cb874789d-44cjq_5ab25c9f-91f2-46f2-8abf-5004d8c114ad/console/0.log" Feb 16 
15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.614330 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cb874789d-44cjq" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.614576 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cb874789d-44cjq" event={"ID":"5ab25c9f-91f2-46f2-8abf-5004d8c114ad","Type":"ContainerDied","Data":"2ef02b500f27905a4144d7afb7f5f45a0144521e9f481a2f7671e1a311d7ac8c"} Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.614631 4705 scope.go:117] "RemoveContainer" containerID="b9665d2970a8c4f5fa92be6c299171cf94ba823f0cf4cc2d207db22022558095" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.632570 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bkfjd" event={"ID":"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d","Type":"ContainerStarted","Data":"30b1733a19ec2f0e771116151f10812bfa16ad5725d0557df1dec597eb7f8718"} Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.638653 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" event={"ID":"cf9cafcc-24ed-4b80-9483-33f60d92f00f","Type":"ContainerDied","Data":"4c2b9573a1dddb4e4b1bb02fe4917b62d7337ef3ddbdeb3932c87fcea91971b6"} Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.638795 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7rdzt" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.643163 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2kkpm" event={"ID":"1eba064a-3f7c-4395-beca-1b77b85e1a29","Type":"ContainerStarted","Data":"8e5b1c2dc379b87aa6c47ebc3d629748ed51bf65dca39e43eb06d0a9ecab4706"} Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.655334 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=40.606765288 podStartE2EDuration="1m9.655311895s" podCreationTimestamp="2026-02-16 15:12:01 +0000 UTC" firstStartedPulling="2026-02-16 15:12:04.687546145 +0000 UTC m=+1118.872523221" lastFinishedPulling="2026-02-16 15:12:33.736092752 +0000 UTC m=+1147.921069828" observedRunningTime="2026-02-16 15:13:10.652730273 +0000 UTC m=+1184.837707379" watchObservedRunningTime="2026-02-16 15:13:10.655311895 +0000 UTC m=+1184.840288971" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.673492 4705 scope.go:117] "RemoveContainer" containerID="fbd2f10536c7c8de9fd23012a23722dfc54f26482b28650f111c8e0634add3bd" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.710730 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-bkfjd" podStartSLOduration=2.836500226 podStartE2EDuration="10.710700823s" podCreationTimestamp="2026-02-16 15:13:00 +0000 UTC" firstStartedPulling="2026-02-16 15:13:01.642645904 +0000 UTC m=+1175.827622980" lastFinishedPulling="2026-02-16 15:13:09.516846501 +0000 UTC m=+1183.701823577" observedRunningTime="2026-02-16 15:13:10.688808598 +0000 UTC m=+1184.873785674" watchObservedRunningTime="2026-02-16 15:13:10.710700823 +0000 UTC m=+1184.895677899" Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.729603 4705 scope.go:117] "RemoveContainer" containerID="aab3bf3fd9a6ac7b00f1d7f4d403634f6903e2d7b39a53d0805702ee717f2a00" Feb 16 15:13:10 crc 
kubenswrapper[4705]: I0216 15:13:10.740792 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7rdzt"] Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.758601 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7rdzt"] Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.769487 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5cb874789d-44cjq"] Feb 16 15:13:10 crc kubenswrapper[4705]: I0216 15:13:10.776659 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5cb874789d-44cjq"] Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.120155 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lz7zl"] Feb 16 15:13:11 crc kubenswrapper[4705]: W0216 15:13:11.132772 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb68c2080_dd84_406b_ba19_b4cdd136c90e.slice/crio-590d833e7f79640c619cdd70a6d1507d048d45083d88b430cca913cadb41bc0b WatchSource:0}: Error finding container 590d833e7f79640c619cdd70a6d1507d048d45083d88b430cca913cadb41bc0b: Status 404 returned error can't find the container with id 590d833e7f79640c619cdd70a6d1507d048d45083d88b430cca913cadb41bc0b Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.260463 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 15:13:11 crc kubenswrapper[4705]: W0216 15:13:11.274131 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod683ef288_8b6e_4612_be52_d1654bd75098.slice/crio-3c16a853ff0683de7e65e4c7c2c283c0bc34b6c75fda5fb9261d347018293d69 WatchSource:0}: Error finding container 3c16a853ff0683de7e65e4c7c2c283c0bc34b6c75fda5fb9261d347018293d69: Status 404 returned error can't find the container with id 
3c16a853ff0683de7e65e4c7c2c283c0bc34b6c75fda5fb9261d347018293d69 Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.668705 4705 generic.go:334] "Generic (PLEG): container finished" podID="b68c2080-dd84-406b-ba19-b4cdd136c90e" containerID="e75206ab14fb3712b094ac170d341a1c3364f06bb8b3dfb2b35e1aa8ca3e80f3" exitCode=0 Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.668819 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lz7zl" event={"ID":"b68c2080-dd84-406b-ba19-b4cdd136c90e","Type":"ContainerDied","Data":"e75206ab14fb3712b094ac170d341a1c3364f06bb8b3dfb2b35e1aa8ca3e80f3"} Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.669228 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lz7zl" event={"ID":"b68c2080-dd84-406b-ba19-b4cdd136c90e","Type":"ContainerStarted","Data":"590d833e7f79640c619cdd70a6d1507d048d45083d88b430cca913cadb41bc0b"} Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.675300 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"683ef288-8b6e-4612-be52-d1654bd75098","Type":"ContainerStarted","Data":"3c16a853ff0683de7e65e4c7c2c283c0bc34b6c75fda5fb9261d347018293d69"} Feb 16 15:13:11 crc kubenswrapper[4705]: I0216 15:13:11.736495 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-crbv8" podUID="4374b7db-8c42-42e1-b2bd-c633bdd8edfd" containerName="ovn-controller" probeResult="failure" output=< Feb 16 15:13:11 crc kubenswrapper[4705]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 15:13:11 crc kubenswrapper[4705]: > Feb 16 15:13:12 crc kubenswrapper[4705]: I0216 15:13:12.440434 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" path="/var/lib/kubelet/pods/5ab25c9f-91f2-46f2-8abf-5004d8c114ad/volumes" Feb 16 15:13:12 crc kubenswrapper[4705]: I0216 
15:13:12.441741 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf9cafcc-24ed-4b80-9483-33f60d92f00f" path="/var/lib/kubelet/pods/cf9cafcc-24ed-4b80-9483-33f60d92f00f/volumes" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.057963 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.214030 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzzvh\" (UniqueName: \"kubernetes.io/projected/b68c2080-dd84-406b-ba19-b4cdd136c90e-kube-api-access-fzzvh\") pod \"b68c2080-dd84-406b-ba19-b4cdd136c90e\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.214419 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b68c2080-dd84-406b-ba19-b4cdd136c90e-operator-scripts\") pod \"b68c2080-dd84-406b-ba19-b4cdd136c90e\" (UID: \"b68c2080-dd84-406b-ba19-b4cdd136c90e\") " Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.215018 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b68c2080-dd84-406b-ba19-b4cdd136c90e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b68c2080-dd84-406b-ba19-b4cdd136c90e" (UID: "b68c2080-dd84-406b-ba19-b4cdd136c90e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.215592 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b68c2080-dd84-406b-ba19-b4cdd136c90e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.220839 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b68c2080-dd84-406b-ba19-b4cdd136c90e-kube-api-access-fzzvh" (OuterVolumeSpecName: "kube-api-access-fzzvh") pod "b68c2080-dd84-406b-ba19-b4cdd136c90e" (UID: "b68c2080-dd84-406b-ba19-b4cdd136c90e"). InnerVolumeSpecName "kube-api-access-fzzvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.318277 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzzvh\" (UniqueName: \"kubernetes.io/projected/b68c2080-dd84-406b-ba19-b4cdd136c90e-kube-api-access-fzzvh\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.718818 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerStarted","Data":"3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76"} Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.720900 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lz7zl" event={"ID":"b68c2080-dd84-406b-ba19-b4cdd136c90e","Type":"ContainerDied","Data":"590d833e7f79640c619cdd70a6d1507d048d45083d88b430cca913cadb41bc0b"} Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.720942 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lz7zl" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.720939 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="590d833e7f79640c619cdd70a6d1507d048d45083d88b430cca913cadb41bc0b" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.723941 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"683ef288-8b6e-4612-be52-d1654bd75098","Type":"ContainerStarted","Data":"2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8"} Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.750231 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=26.367763689 podStartE2EDuration="1m5.750211881s" podCreationTimestamp="2026-02-16 15:12:09 +0000 UTC" firstStartedPulling="2026-02-16 15:12:34.409806483 +0000 UTC m=+1148.594783579" lastFinishedPulling="2026-02-16 15:13:13.792254695 +0000 UTC m=+1187.977231771" observedRunningTime="2026-02-16 15:13:14.750107588 +0000 UTC m=+1188.935084664" watchObservedRunningTime="2026-02-16 15:13:14.750211881 +0000 UTC m=+1188.935188947" Feb 16 15:13:14 crc kubenswrapper[4705]: I0216 15:13:14.782930 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.277171096 podStartE2EDuration="5.782894011s" podCreationTimestamp="2026-02-16 15:13:09 +0000 UTC" firstStartedPulling="2026-02-16 15:13:11.279562354 +0000 UTC m=+1185.464539430" lastFinishedPulling="2026-02-16 15:13:13.785285269 +0000 UTC m=+1187.970262345" observedRunningTime="2026-02-16 15:13:14.765881052 +0000 UTC m=+1188.950858128" watchObservedRunningTime="2026-02-16 15:13:14.782894011 +0000 UTC m=+1188.967871087" Feb 16 15:13:15 crc kubenswrapper[4705]: I0216 15:13:15.737316 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerID="86e9ac4153a2ccf0f2f0a689cbb68d98c66cd9f62606340a11ddf8bd0f8e2f02" exitCode=0 Feb 16 15:13:15 crc kubenswrapper[4705]: I0216 15:13:15.737449 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3ba19f15-a399-4d4b-bf32-a2a870a660e5","Type":"ContainerDied","Data":"86e9ac4153a2ccf0f2f0a689cbb68d98c66cd9f62606340a11ddf8bd0f8e2f02"} Feb 16 15:13:15 crc kubenswrapper[4705]: I0216 15:13:15.744153 4705 generic.go:334] "Generic (PLEG): container finished" podID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerID="3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523" exitCode=0 Feb 16 15:13:15 crc kubenswrapper[4705]: I0216 15:13:15.744252 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f6b410b5-951c-43d2-b846-3fef02ec0f7f","Type":"ContainerDied","Data":"3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523"} Feb 16 15:13:15 crc kubenswrapper[4705]: I0216 15:13:15.880156 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4705]: I0216 15:13:15.958796 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0" Feb 16 15:13:15 crc kubenswrapper[4705]: E0216 15:13:15.959295 4705 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:13:15 crc kubenswrapper[4705]: E0216 15:13:15.959390 4705 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:13:15 crc kubenswrapper[4705]: E0216 15:13:15.959497 4705 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift podName:a1c8c609-3b8c-48d1-9731-56451bf10919 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:31.959481328 +0000 UTC m=+1206.144458404 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift") pod "swift-storage-0" (UID: "a1c8c609-3b8c-48d1-9731-56451bf10919") : configmap "swift-ring-files" not found Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.733121 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-crbv8" podUID="4374b7db-8c42-42e1-b2bd-c633bdd8edfd" containerName="ovn-controller" probeResult="failure" output=< Feb 16 15:13:16 crc kubenswrapper[4705]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 15:13:16 crc kubenswrapper[4705]: > Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.769402 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3ba19f15-a399-4d4b-bf32-a2a870a660e5","Type":"ContainerStarted","Data":"ddc79c616a980da9bec5ac9f7c1b7626ab1ecb622f323dda933da451c9482f30"} Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.770170 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.774174 4705 generic.go:334] "Generic (PLEG): container finished" podID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerID="663ebd3ccb0d52cf06babb260d76ccd359a0593b49138f63e6178bfe5bfd914d" exitCode=0 Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.774222 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"070373d6-b0bd-43e2-bdf5-ca300875e65d","Type":"ContainerDied","Data":"663ebd3ccb0d52cf06babb260d76ccd359a0593b49138f63e6178bfe5bfd914d"} Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.778229 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f6b410b5-951c-43d2-b846-3fef02ec0f7f","Type":"ContainerStarted","Data":"a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641"} Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.779150 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.802735 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371961.05206 podStartE2EDuration="1m15.802716808s" podCreationTimestamp="2026-02-16 15:12:01 +0000 UTC" firstStartedPulling="2026-02-16 15:12:04.194567498 +0000 UTC m=+1118.379544574" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:16.796797701 +0000 UTC m=+1190.981774777" watchObservedRunningTime="2026-02-16 15:13:16.802716808 +0000 UTC m=+1190.987693884" Feb 16 15:13:16 crc kubenswrapper[4705]: I0216 15:13:16.840449 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371961.01436 podStartE2EDuration="1m15.840415598s" podCreationTimestamp="2026-02-16 15:12:01 +0000 UTC" firstStartedPulling="2026-02-16 15:12:05.095426679 +0000 UTC m=+1119.280403755" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:16.822767702 +0000 UTC m=+1191.007744778" watchObservedRunningTime="2026-02-16 15:13:16.840415598 +0000 UTC m=+1191.025392674" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.008991 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 
15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.014280 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pc9sf" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.283061 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-crbv8-config-z9d5l"] Feb 16 15:13:17 crc kubenswrapper[4705]: E0216 15:13:17.283730 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" containerName="console" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.283760 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" containerName="console" Feb 16 15:13:17 crc kubenswrapper[4705]: E0216 15:13:17.283787 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68c2080-dd84-406b-ba19-b4cdd136c90e" containerName="mariadb-account-create-update" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.283796 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68c2080-dd84-406b-ba19-b4cdd136c90e" containerName="mariadb-account-create-update" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.284056 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab25c9f-91f2-46f2-8abf-5004d8c114ad" containerName="console" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.284087 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b68c2080-dd84-406b-ba19-b4cdd136c90e" containerName="mariadb-account-create-update" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.285020 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.288866 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.318592 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crbv8-config-z9d5l"] Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.411210 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run-ovn\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.411265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-log-ovn\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.411755 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.411808 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7w4q\" (UniqueName: \"kubernetes.io/projected/cecdccc6-64fe-465b-a99e-bd27376c7e32-kube-api-access-n7w4q\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: 
\"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.412032 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-additional-scripts\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.412285 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-scripts\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.514844 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.514901 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7w4q\" (UniqueName: \"kubernetes.io/projected/cecdccc6-64fe-465b-a99e-bd27376c7e32-kube-api-access-n7w4q\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.514951 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-additional-scripts\") pod 
\"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.515044 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-scripts\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.515141 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run-ovn\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.515168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-log-ovn\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.515767 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-log-ovn\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.515886 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: 
\"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.516509 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run-ovn\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.517961 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-additional-scripts\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.518304 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-scripts\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.541799 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7w4q\" (UniqueName: \"kubernetes.io/projected/cecdccc6-64fe-465b-a99e-bd27376c7e32-kube-api-access-n7w4q\") pod \"ovn-controller-crbv8-config-z9d5l\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:17 crc kubenswrapper[4705]: I0216 15:13:17.605148 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:18 crc kubenswrapper[4705]: I0216 15:13:18.800150 4705 generic.go:334] "Generic (PLEG): container finished" podID="f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" containerID="30b1733a19ec2f0e771116151f10812bfa16ad5725d0557df1dec597eb7f8718" exitCode=0 Feb 16 15:13:18 crc kubenswrapper[4705]: I0216 15:13:18.800253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bkfjd" event={"ID":"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d","Type":"ContainerDied","Data":"30b1733a19ec2f0e771116151f10812bfa16ad5725d0557df1dec597eb7f8718"} Feb 16 15:13:21 crc kubenswrapper[4705]: I0216 15:13:21.723828 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-crbv8" podUID="4374b7db-8c42-42e1-b2bd-c633bdd8edfd" containerName="ovn-controller" probeResult="failure" output=< Feb 16 15:13:21 crc kubenswrapper[4705]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 15:13:21 crc kubenswrapper[4705]: > Feb 16 15:13:23 crc kubenswrapper[4705]: I0216 15:13:23.703594 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.790489 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.875857 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bkfjd" event={"ID":"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d","Type":"ContainerDied","Data":"dfe132517673f75467ba9259f9327d854f3707668943a8696e2a0f96d6cf192b"} Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.876353 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfe132517673f75467ba9259f9327d854f3707668943a8696e2a0f96d6cf192b" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.876459 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bkfjd" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.881548 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070373d6-b0bd-43e2-bdf5-ca300875e65d","Type":"ContainerStarted","Data":"9f6994c40bbdc294c2e47b9d750eb837f2ca96e2252dda9f1acab79e978bee8f"} Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.882869 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.920631 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371953.934166 podStartE2EDuration="1m22.920610089s" podCreationTimestamp="2026-02-16 15:12:02 +0000 UTC" firstStartedPulling="2026-02-16 15:12:04.895820834 +0000 UTC m=+1119.080797910" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:24.91070224 +0000 UTC m=+1199.095679316" watchObservedRunningTime="2026-02-16 15:13:24.920610089 +0000 UTC m=+1199.105587165" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.957941 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-scripts\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.958070 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-combined-ca-bundle\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.958122 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-swiftconf\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.958142 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-ring-data-devices\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.958185 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-dispersionconf\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.958212 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-etc-swift\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc 
kubenswrapper[4705]: I0216 15:13:24.958260 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwncx\" (UniqueName: \"kubernetes.io/projected/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-kube-api-access-cwncx\") pod \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\" (UID: \"f5297b85-4dcb-4e4d-8b11-fbba54b2b31d\") " Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.959112 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.959147 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.964449 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-kube-api-access-cwncx" (OuterVolumeSpecName: "kube-api-access-cwncx") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "kube-api-access-cwncx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.969584 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:24 crc kubenswrapper[4705]: I0216 15:13:24.980823 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crbv8-config-z9d5l"] Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.005264 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-scripts" (OuterVolumeSpecName: "scripts") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.005387 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.010534 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" (UID: "f5297b85-4dcb-4e4d-8b11-fbba54b2b31d"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061452 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061501 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061516 4705 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061529 4705 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061545 4705 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061555 4705 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.061571 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwncx\" (UniqueName: \"kubernetes.io/projected/f5297b85-4dcb-4e4d-8b11-fbba54b2b31d-kube-api-access-cwncx\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.880991 4705 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.887053 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.894317 4705 generic.go:334] "Generic (PLEG): container finished" podID="cecdccc6-64fe-465b-a99e-bd27376c7e32" containerID="6b13db9b9dc4dcec392ffa4e74f00a9ee43871effc42f68cb3ed77e75924c36e" exitCode=0 Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.894445 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8-config-z9d5l" event={"ID":"cecdccc6-64fe-465b-a99e-bd27376c7e32","Type":"ContainerDied","Data":"6b13db9b9dc4dcec392ffa4e74f00a9ee43871effc42f68cb3ed77e75924c36e"} Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.894506 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8-config-z9d5l" event={"ID":"cecdccc6-64fe-465b-a99e-bd27376c7e32","Type":"ContainerStarted","Data":"2dec16247089b7227649b2dae3d9bd5708efe76e9e7d81b6ef14b7beed9b007a"} Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.896675 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2kkpm" event={"ID":"1eba064a-3f7c-4395-beca-1b77b85e1a29","Type":"ContainerStarted","Data":"2f3be024158b93066d5262e9224908fddecc1a451092d024f7b8f2601466a9b4"} Feb 16 15:13:25 crc kubenswrapper[4705]: I0216 15:13:25.937049 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-2kkpm" podStartSLOduration=4.931009329 podStartE2EDuration="18.937029399s" podCreationTimestamp="2026-02-16 15:13:07 +0000 UTC" firstStartedPulling="2026-02-16 15:13:10.55023855 +0000 UTC m=+1184.735215626" lastFinishedPulling="2026-02-16 15:13:24.55625862 +0000 UTC m=+1198.741235696" observedRunningTime="2026-02-16 15:13:25.930125695 +0000 UTC 
m=+1200.115102771" watchObservedRunningTime="2026-02-16 15:13:25.937029399 +0000 UTC m=+1200.122006475" Feb 16 15:13:26 crc kubenswrapper[4705]: I0216 15:13:26.720745 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-crbv8" Feb 16 15:13:26 crc kubenswrapper[4705]: I0216 15:13:26.909988 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.476601 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.533880 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run\") pod \"cecdccc6-64fe-465b-a99e-bd27376c7e32\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.533967 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run" (OuterVolumeSpecName: "var-run") pod "cecdccc6-64fe-465b-a99e-bd27376c7e32" (UID: "cecdccc6-64fe-465b-a99e-bd27376c7e32"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534047 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7w4q\" (UniqueName: \"kubernetes.io/projected/cecdccc6-64fe-465b-a99e-bd27376c7e32-kube-api-access-n7w4q\") pod \"cecdccc6-64fe-465b-a99e-bd27376c7e32\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534146 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run-ovn\") pod \"cecdccc6-64fe-465b-a99e-bd27376c7e32\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534201 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-log-ovn\") pod \"cecdccc6-64fe-465b-a99e-bd27376c7e32\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534244 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "cecdccc6-64fe-465b-a99e-bd27376c7e32" (UID: "cecdccc6-64fe-465b-a99e-bd27376c7e32"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534343 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-additional-scripts\") pod \"cecdccc6-64fe-465b-a99e-bd27376c7e32\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534390 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "cecdccc6-64fe-465b-a99e-bd27376c7e32" (UID: "cecdccc6-64fe-465b-a99e-bd27376c7e32"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.534536 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-scripts\") pod \"cecdccc6-64fe-465b-a99e-bd27376c7e32\" (UID: \"cecdccc6-64fe-465b-a99e-bd27376c7e32\") " Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.535281 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "cecdccc6-64fe-465b-a99e-bd27376c7e32" (UID: "cecdccc6-64fe-465b-a99e-bd27376c7e32"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.535976 4705 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.536004 4705 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.536016 4705 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.536029 4705 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cecdccc6-64fe-465b-a99e-bd27376c7e32-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.536273 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-scripts" (OuterVolumeSpecName: "scripts") pod "cecdccc6-64fe-465b-a99e-bd27376c7e32" (UID: "cecdccc6-64fe-465b-a99e-bd27376c7e32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.566390 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cecdccc6-64fe-465b-a99e-bd27376c7e32-kube-api-access-n7w4q" (OuterVolumeSpecName: "kube-api-access-n7w4q") pod "cecdccc6-64fe-465b-a99e-bd27376c7e32" (UID: "cecdccc6-64fe-465b-a99e-bd27376c7e32"). InnerVolumeSpecName "kube-api-access-n7w4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.638309 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7w4q\" (UniqueName: \"kubernetes.io/projected/cecdccc6-64fe-465b-a99e-bd27376c7e32-kube-api-access-n7w4q\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.638855 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cecdccc6-64fe-465b-a99e-bd27376c7e32-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.917483 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crbv8-config-z9d5l" Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.917480 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8-config-z9d5l" event={"ID":"cecdccc6-64fe-465b-a99e-bd27376c7e32","Type":"ContainerDied","Data":"2dec16247089b7227649b2dae3d9bd5708efe76e9e7d81b6ef14b7beed9b007a"} Feb 16 15:13:27 crc kubenswrapper[4705]: I0216 15:13:27.917557 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dec16247089b7227649b2dae3d9bd5708efe76e9e7d81b6ef14b7beed9b007a" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.587859 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-crbv8-config-z9d5l"] Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.597387 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-crbv8-config-z9d5l"] Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.721424 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-crbv8-config-fwr69"] Feb 16 15:13:28 crc kubenswrapper[4705]: E0216 15:13:28.722053 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cecdccc6-64fe-465b-a99e-bd27376c7e32" 
containerName="ovn-config" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.722074 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cecdccc6-64fe-465b-a99e-bd27376c7e32" containerName="ovn-config" Feb 16 15:13:28 crc kubenswrapper[4705]: E0216 15:13:28.722093 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" containerName="swift-ring-rebalance" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.722102 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" containerName="swift-ring-rebalance" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.722347 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5297b85-4dcb-4e4d-8b11-fbba54b2b31d" containerName="swift-ring-rebalance" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.722398 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cecdccc6-64fe-465b-a99e-bd27376c7e32" containerName="ovn-config" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.723235 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.725708 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.745406 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crbv8-config-fwr69"] Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.768571 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run-ovn\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.769051 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-additional-scripts\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.769123 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlfmq\" (UniqueName: \"kubernetes.io/projected/397a0852-4076-4e11-bf86-af0ec6b81028-kube-api-access-hlfmq\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.769214 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: 
\"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.769512 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-log-ovn\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.769553 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-scripts\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.871710 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-additional-scripts\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.872556 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-additional-scripts\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.872620 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlfmq\" (UniqueName: \"kubernetes.io/projected/397a0852-4076-4e11-bf86-af0ec6b81028-kube-api-access-hlfmq\") pod 
\"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.872776 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.873062 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-log-ovn\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.873114 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-scripts\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.873186 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-log-ovn\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.873232 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: 
\"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.873439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run-ovn\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.873535 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run-ovn\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.875066 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-scripts\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:28 crc kubenswrapper[4705]: I0216 15:13:28.894629 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlfmq\" (UniqueName: \"kubernetes.io/projected/397a0852-4076-4e11-bf86-af0ec6b81028-kube-api-access-hlfmq\") pod \"ovn-controller-crbv8-config-fwr69\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") " pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.041541 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-crbv8-config-fwr69" Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.554721 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crbv8-config-fwr69"] Feb 16 15:13:29 crc kubenswrapper[4705]: W0216 15:13:29.560945 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod397a0852_4076_4e11_bf86_af0ec6b81028.slice/crio-8382bebf6e75f6deb153e0d5b999a37c7593976c7722e1337e8e0044ed55aa3c WatchSource:0}: Error finding container 8382bebf6e75f6deb153e0d5b999a37c7593976c7722e1337e8e0044ed55aa3c: Status 404 returned error can't find the container with id 8382bebf6e75f6deb153e0d5b999a37c7593976c7722e1337e8e0044ed55aa3c Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.795868 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.796765 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="thanos-sidecar" containerID="cri-o://3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76" gracePeriod=600 Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.796841 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="config-reloader" containerID="cri-o://a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e" gracePeriod=600 Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.796991 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="prometheus" 
containerID="cri-o://a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8" gracePeriod=600 Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.947517 4705 generic.go:334] "Generic (PLEG): container finished" podID="761a74d6-061c-47dd-b376-b6d6a1906382" containerID="3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76" exitCode=0 Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.947559 4705 generic.go:334] "Generic (PLEG): container finished" podID="761a74d6-061c-47dd-b376-b6d6a1906382" containerID="a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8" exitCode=0 Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.947593 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerDied","Data":"3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76"} Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.947683 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerDied","Data":"a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8"} Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.951177 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8-config-fwr69" event={"ID":"397a0852-4076-4e11-bf86-af0ec6b81028","Type":"ContainerStarted","Data":"eda342c5c8c6a51871935a7c42d9108a69f95180c1db4ddf74979e0a43434713"} Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.951231 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8-config-fwr69" event={"ID":"397a0852-4076-4e11-bf86-af0ec6b81028","Type":"ContainerStarted","Data":"8382bebf6e75f6deb153e0d5b999a37c7593976c7722e1337e8e0044ed55aa3c"} Feb 16 15:13:29 crc kubenswrapper[4705]: I0216 15:13:29.996108 4705 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/ovn-controller-crbv8-config-fwr69" podStartSLOduration=1.996084958 podStartE2EDuration="1.996084958s" podCreationTimestamp="2026-02-16 15:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:29.983042041 +0000 UTC m=+1204.168019107" watchObservedRunningTime="2026-02-16 15:13:29.996084958 +0000 UTC m=+1204.181062034" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.431928 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cecdccc6-64fe-465b-a99e-bd27376c7e32" path="/var/lib/kubelet/pods/cecdccc6-64fe-465b-a99e-bd27376c7e32/volumes" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.842258 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.925468 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-web-config\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.925917 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926033 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-2\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: 
\"761a74d6-061c-47dd-b376-b6d6a1906382\") " Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926183 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-0\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926359 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/761a74d6-061c-47dd-b376-b6d6a1906382-config-out\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926487 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-1\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926540 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87msx\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-kube-api-access-87msx\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926619 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-tls-assets\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926661 4705 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-config\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926703 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-thanos-prometheus-http-client-file\") pod \"761a74d6-061c-47dd-b376-b6d6a1906382\" (UID: \"761a74d6-061c-47dd-b376-b6d6a1906382\") " Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926887 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.926942 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.927273 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.927791 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.927821 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.927840 4705 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/761a74d6-061c-47dd-b376-b6d6a1906382-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.934276 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.936872 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-kube-api-access-87msx" (OuterVolumeSpecName: "kube-api-access-87msx") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "kube-api-access-87msx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.943114 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.943168 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-config" (OuterVolumeSpecName: "config") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.949672 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/761a74d6-061c-47dd-b376-b6d6a1906382-config-out" (OuterVolumeSpecName: "config-out") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.973493 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-web-config" (OuterVolumeSpecName: "web-config") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.974106 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "761a74d6-061c-47dd-b376-b6d6a1906382" (UID: "761a74d6-061c-47dd-b376-b6d6a1906382"). InnerVolumeSpecName "pvc-d7cf3552-166c-4b95-888b-d04078abb8ed". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.979111 4705 generic.go:334] "Generic (PLEG): container finished" podID="761a74d6-061c-47dd-b376-b6d6a1906382" containerID="a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e" exitCode=0 Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.979212 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.979288 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerDied","Data":"a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e"} Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.979350 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"761a74d6-061c-47dd-b376-b6d6a1906382","Type":"ContainerDied","Data":"0527469390d6fe2114a9d14988dc215c1fbcef5ab135d077a80b8055e2b4b3bf"} Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.979389 4705 scope.go:117] "RemoveContainer" containerID="3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76" Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.982817 4705 generic.go:334] "Generic (PLEG): container finished" podID="397a0852-4076-4e11-bf86-af0ec6b81028" containerID="eda342c5c8c6a51871935a7c42d9108a69f95180c1db4ddf74979e0a43434713" exitCode=0 Feb 16 15:13:30 crc kubenswrapper[4705]: I0216 15:13:30.982879 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crbv8-config-fwr69" event={"ID":"397a0852-4076-4e11-bf86-af0ec6b81028","Type":"ContainerDied","Data":"eda342c5c8c6a51871935a7c42d9108a69f95180c1db4ddf74979e0a43434713"} Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.032663 4705 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/761a74d6-061c-47dd-b376-b6d6a1906382-config-out\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.033143 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87msx\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-kube-api-access-87msx\") on node \"crc\" DevicePath \"\"" Feb 16 
15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.033166 4705 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/761a74d6-061c-47dd-b376-b6d6a1906382-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.033183 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.033201 4705 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.033217 4705 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/761a74d6-061c-47dd-b376-b6d6a1906382-web-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.033275 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") on node \"crc\" " Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.040770 4705 scope.go:117] "RemoveContainer" containerID="a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.128846 4705 scope.go:117] "RemoveContainer" containerID="a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8" Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.134084 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.134267 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d7cf3552-166c-4b95-888b-d04078abb8ed" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed") on node "crc"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.134314 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.139677 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.166423 4705 scope.go:117] "RemoveContainer" containerID="f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.167659 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.184515 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.185530 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="config-reloader"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.185580 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="config-reloader"
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.185613 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="prometheus"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.185651 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="prometheus"
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.185694 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="init-config-reloader"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.185728 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="init-config-reloader"
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.185752 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="thanos-sidecar"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.185760 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="thanos-sidecar"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.186174 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="prometheus"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.186219 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="config-reloader"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.186238 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" containerName="thanos-sidecar"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.193563 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.196013 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bs5tf"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.196577 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.197189 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.197311 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.197431 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.199897 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.200077 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.200520 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.204491 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.206637 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.230949 4705 scope.go:117] "RemoveContainer" containerID="3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76"
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.231714 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76\": container with ID starting with 3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76 not found: ID does not exist" containerID="3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.231756 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76"} err="failed to get container status \"3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76\": rpc error: code = NotFound desc = could not find container \"3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76\": container with ID starting with 3a1b5d902fb74e50b596ebbef1a3c5d2083a571f97b268d4e4a58228ac3aec76 not found: ID does not exist"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.231788 4705 scope.go:117] "RemoveContainer" containerID="a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e"
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.233580 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e\": container with ID starting with a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e not found: ID does not exist" containerID="a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.233607 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e"} err="failed to get container status \"a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e\": rpc error: code = NotFound desc = could not find container \"a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e\": container with ID starting with a9cb32d17df0f43e3cff11a43cf3cff85d645c6970789ca5a5fbc92d29208b0e not found: ID does not exist"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.233622 4705 scope.go:117] "RemoveContainer" containerID="a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8"
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.234709 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8\": container with ID starting with a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8 not found: ID does not exist" containerID="a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.234735 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8"} err="failed to get container status \"a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8\": rpc error: code = NotFound desc = could not find container \"a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8\": container with ID starting with a2977f14cdcf41486a49efaa7ce37a1510ecc974fed5de33f1992c931d01bcd8 not found: ID does not exist"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.234750 4705 scope.go:117] "RemoveContainer" containerID="f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902"
Feb 16 15:13:31 crc kubenswrapper[4705]: E0216 15:13:31.236646 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902\": container with ID starting with f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902 not found: ID does not exist" containerID="f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.236670 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902"} err="failed to get container status \"f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902\": rpc error: code = NotFound desc = could not find container \"f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902\": container with ID starting with f51205ee0f05fd9b6dcb53234c2e7b7fa7e21e6afdba49579930404a7a2b4902 not found: ID does not exist"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345031 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0ed43376-64ee-4fa7-9e24-00d85997e8c1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345164 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9q74\" (UniqueName: \"kubernetes.io/projected/0ed43376-64ee-4fa7-9e24-00d85997e8c1-kube-api-access-k9q74\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345226 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345400 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345463 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345491 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345532 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345571 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345638 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345685 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0ed43376-64ee-4fa7-9e24-00d85997e8c1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345711 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345729 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.345749 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.448013 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.449661 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0ed43376-64ee-4fa7-9e24-00d85997e8c1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.449824 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.449959 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.450079 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.450330 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0ed43376-64ee-4fa7-9e24-00d85997e8c1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.450579 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9q74\" (UniqueName: \"kubernetes.io/projected/0ed43376-64ee-4fa7-9e24-00d85997e8c1-kube-api-access-k9q74\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.450705 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.450942 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.451114 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.451261 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.451383 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.451432 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.451609 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.451766 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.452042 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0ed43376-64ee-4fa7-9e24-00d85997e8c1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.453624 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.453658 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/88c6cd7cb604a645ab31c0e76d113b8c44ff69d3e39fcb5b354218108db12562/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.459697 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0ed43376-64ee-4fa7-9e24-00d85997e8c1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.461491 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.461663 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.461974 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.462126 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.463004 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.464548 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0ed43376-64ee-4fa7-9e24-00d85997e8c1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.469235 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed43376-64ee-4fa7-9e24-00d85997e8c1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.477603 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9q74\" (UniqueName: \"kubernetes.io/projected/0ed43376-64ee-4fa7-9e24-00d85997e8c1-kube-api-access-k9q74\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.506333 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7cf3552-166c-4b95-888b-d04078abb8ed\") pod \"prometheus-metric-storage-0\" (UID: \"0ed43376-64ee-4fa7-9e24-00d85997e8c1\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:31 crc kubenswrapper[4705]: I0216 15:13:31.563174 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.002321 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.032562 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a1c8c609-3b8c-48d1-9731-56451bf10919-etc-swift\") pod \"swift-storage-0\" (UID: \"a1c8c609-3b8c-48d1-9731-56451bf10919\") " pod="openstack/swift-storage-0"
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.167078 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.362934 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:13:32 crc kubenswrapper[4705]: W0216 15:13:32.375911 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ed43376_64ee_4fa7_9e24_00d85997e8c1.slice/crio-8d3b500d206aa80ad662bf2d4ab0b4910c0c6fcc99b2cb002f6a6f07244456b5 WatchSource:0}: Error finding container 8d3b500d206aa80ad662bf2d4ab0b4910c0c6fcc99b2cb002f6a6f07244456b5: Status 404 returned error can't find the container with id 8d3b500d206aa80ad662bf2d4ab0b4910c0c6fcc99b2cb002f6a6f07244456b5
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.438620 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="761a74d6-061c-47dd-b376-b6d6a1906382" path="/var/lib/kubelet/pods/761a74d6-061c-47dd-b376-b6d6a1906382/volumes"
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.454684 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519454 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run\") pod \"397a0852-4076-4e11-bf86-af0ec6b81028\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") "
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519519 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run-ovn\") pod \"397a0852-4076-4e11-bf86-af0ec6b81028\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") "
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519561 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlfmq\" (UniqueName: \"kubernetes.io/projected/397a0852-4076-4e11-bf86-af0ec6b81028-kube-api-access-hlfmq\") pod \"397a0852-4076-4e11-bf86-af0ec6b81028\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") "
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519611 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-additional-scripts\") pod \"397a0852-4076-4e11-bf86-af0ec6b81028\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") "
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519648 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-log-ovn\") pod \"397a0852-4076-4e11-bf86-af0ec6b81028\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") "
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519964 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-scripts\") pod \"397a0852-4076-4e11-bf86-af0ec6b81028\" (UID: \"397a0852-4076-4e11-bf86-af0ec6b81028\") "
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.519983 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "397a0852-4076-4e11-bf86-af0ec6b81028" (UID: "397a0852-4076-4e11-bf86-af0ec6b81028"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.520049 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run" (OuterVolumeSpecName: "var-run") pod "397a0852-4076-4e11-bf86-af0ec6b81028" (UID: "397a0852-4076-4e11-bf86-af0ec6b81028"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.520266 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "397a0852-4076-4e11-bf86-af0ec6b81028" (UID: "397a0852-4076-4e11-bf86-af0ec6b81028"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521013 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "397a0852-4076-4e11-bf86-af0ec6b81028" (UID: "397a0852-4076-4e11-bf86-af0ec6b81028"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521347 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-scripts" (OuterVolumeSpecName: "scripts") pod "397a0852-4076-4e11-bf86-af0ec6b81028" (UID: "397a0852-4076-4e11-bf86-af0ec6b81028"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521579 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521601 4705 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521613 4705 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521623 4705 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/397a0852-4076-4e11-bf86-af0ec6b81028-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.521635 4705 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/397a0852-4076-4e11-bf86-af0ec6b81028-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.525997 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/397a0852-4076-4e11-bf86-af0ec6b81028-kube-api-access-hlfmq" (OuterVolumeSpecName: "kube-api-access-hlfmq") pod "397a0852-4076-4e11-bf86-af0ec6b81028" (UID: "397a0852-4076-4e11-bf86-af0ec6b81028"). InnerVolumeSpecName "kube-api-access-hlfmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.623431 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlfmq\" (UniqueName: \"kubernetes.io/projected/397a0852-4076-4e11-bf86-af0ec6b81028-kube-api-access-hlfmq\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.651989 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-crbv8-config-fwr69"]
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.661930 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-crbv8-config-fwr69"]
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.841226 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 16 15:13:32 crc kubenswrapper[4705]: W0216 15:13:32.842353 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1c8c609_3b8c_48d1_9731_56451bf10919.slice/crio-b22553fa991b06503aeed9484cee162d4916a2332c3b1181f88049c64f43457b WatchSource:0}: Error finding container b22553fa991b06503aeed9484cee162d4916a2332c3b1181f88049c64f43457b: Status 404 returned error can't find the container with id b22553fa991b06503aeed9484cee162d4916a2332c3b1181f88049c64f43457b
Feb 16 15:13:32 crc kubenswrapper[4705]: I0216 15:13:32.846133 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.030481 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"b22553fa991b06503aeed9484cee162d4916a2332c3b1181f88049c64f43457b"}
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.033410 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crbv8-config-fwr69"
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.033437 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8382bebf6e75f6deb153e0d5b999a37c7593976c7722e1337e8e0044ed55aa3c"
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.035936 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ed43376-64ee-4fa7-9e24-00d85997e8c1","Type":"ContainerStarted","Data":"8d3b500d206aa80ad662bf2d4ab0b4910c0c6fcc99b2cb002f6a6f07244456b5"}
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.039321 4705 generic.go:334] "Generic (PLEG): container finished" podID="1eba064a-3f7c-4395-beca-1b77b85e1a29" containerID="2f3be024158b93066d5262e9224908fddecc1a451092d024f7b8f2601466a9b4" exitCode=0
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.039389 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2kkpm" event={"ID":"1eba064a-3f7c-4395-beca-1b77b85e1a29","Type":"ContainerDied","Data":"2f3be024158b93066d5262e9224908fddecc1a451092d024f7b8f2601466a9b4"}
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.340561 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 16 15:13:33 crc kubenswrapper[4705]: I0216 15:13:33.735400 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.162055 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-mdv7p"]
Feb 16 15:13:34 crc kubenswrapper[4705]: E0216 15:13:34.162834 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397a0852-4076-4e11-bf86-af0ec6b81028" containerName="ovn-config"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.162973 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="397a0852-4076-4e11-bf86-af0ec6b81028" containerName="ovn-config"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.163261 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="397a0852-4076-4e11-bf86-af0ec6b81028" containerName="ovn-config"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.164175 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mdv7p"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.176500 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-mdv7p"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.276234 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wbjh\" (UniqueName: \"kubernetes.io/projected/ae5e7e5c-9868-457d-872b-ec1d3f34449a-kube-api-access-5wbjh\") pod \"heat-db-create-mdv7p\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " pod="openstack/heat-db-create-mdv7p"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.276401 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae5e7e5c-9868-457d-872b-ec1d3f34449a-operator-scripts\") pod \"heat-db-create-mdv7p\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " pod="openstack/heat-db-create-mdv7p"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.378790 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae5e7e5c-9868-457d-872b-ec1d3f34449a-operator-scripts\") pod \"heat-db-create-mdv7p\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") "
pod="openstack/heat-db-create-mdv7p" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.378923 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wbjh\" (UniqueName: \"kubernetes.io/projected/ae5e7e5c-9868-457d-872b-ec1d3f34449a-kube-api-access-5wbjh\") pod \"heat-db-create-mdv7p\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " pod="openstack/heat-db-create-mdv7p" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.379351 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-56f8-account-create-update-kbzxq"] Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.380320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae5e7e5c-9868-457d-872b-ec1d3f34449a-operator-scripts\") pod \"heat-db-create-mdv7p\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " pod="openstack/heat-db-create-mdv7p" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.381686 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-56f8-account-create-update-kbzxq" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.387434 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.400267 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wbjh\" (UniqueName: \"kubernetes.io/projected/ae5e7e5c-9868-457d-872b-ec1d3f34449a-kube-api-access-5wbjh\") pod \"heat-db-create-mdv7p\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " pod="openstack/heat-db-create-mdv7p" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.409129 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-56f8-account-create-update-kbzxq"] Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.459512 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="397a0852-4076-4e11-bf86-af0ec6b81028" path="/var/lib/kubelet/pods/397a0852-4076-4e11-bf86-af0ec6b81028/volumes" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.476028 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-mdv7p" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.481185 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwlqz\" (UniqueName: \"kubernetes.io/projected/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-kube-api-access-rwlqz\") pod \"heat-56f8-account-create-update-kbzxq\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " pod="openstack/heat-56f8-account-create-update-kbzxq" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.481282 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-operator-scripts\") pod \"heat-56f8-account-create-update-kbzxq\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " pod="openstack/heat-56f8-account-create-update-kbzxq" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.504497 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-fpgrj"] Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.506382 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fpgrj" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.531507 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fpgrj"] Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.550807 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ea32-account-create-update-7qwh2"] Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.554614 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ea32-account-create-update-7qwh2" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.559132 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.562566 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ea32-account-create-update-7qwh2"] Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.575643 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-tr9gx"] Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.577222 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tr9gx" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.588339 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwlqz\" (UniqueName: \"kubernetes.io/projected/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-kube-api-access-rwlqz\") pod \"heat-56f8-account-create-update-kbzxq\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " pod="openstack/heat-56f8-account-create-update-kbzxq" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.588509 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bsgn\" (UniqueName: \"kubernetes.io/projected/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-kube-api-access-2bsgn\") pod \"cinder-db-create-fpgrj\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " pod="openstack/cinder-db-create-fpgrj" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.588568 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-operator-scripts\") pod \"heat-56f8-account-create-update-kbzxq\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " pod="openstack/heat-56f8-account-create-update-kbzxq" Feb 
16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.588597 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5spl\" (UniqueName: \"kubernetes.io/projected/104ec45d-e95d-40c0-80a8-d59de9e2d45a-kube-api-access-f5spl\") pod \"cinder-ea32-account-create-update-7qwh2\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " pod="openstack/cinder-ea32-account-create-update-7qwh2" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.589900 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-operator-scripts\") pod \"heat-56f8-account-create-update-kbzxq\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " pod="openstack/heat-56f8-account-create-update-kbzxq" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.591102 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/104ec45d-e95d-40c0-80a8-d59de9e2d45a-operator-scripts\") pod \"cinder-ea32-account-create-update-7qwh2\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " pod="openstack/cinder-ea32-account-create-update-7qwh2" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.591180 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-operator-scripts\") pod \"cinder-db-create-fpgrj\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " pod="openstack/cinder-db-create-fpgrj" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.599960 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tr9gx"] Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.609312 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-gmlkp"] Feb 16 
15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.610808 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gmlkp" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.620833 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g4ghk" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.621176 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.621440 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.621607 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.637225 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gmlkp"] Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.643755 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwlqz\" (UniqueName: \"kubernetes.io/projected/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-kube-api-access-rwlqz\") pod \"heat-56f8-account-create-update-kbzxq\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " pod="openstack/heat-56f8-account-create-update-kbzxq" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.673082 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3bfb-account-create-update-r5cz9"] Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.676897 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3bfb-account-create-update-r5cz9" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.680497 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.688474 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lqlft"] Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.692185 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqlft" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.693944 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc6sp\" (UniqueName: \"kubernetes.io/projected/00962490-7e63-4ba2-95e5-d95167d392bd-kube-api-access-mc6sp\") pod \"barbican-db-create-tr9gx\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " pod="openstack/barbican-db-create-tr9gx" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694016 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bsgn\" (UniqueName: \"kubernetes.io/projected/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-kube-api-access-2bsgn\") pod \"cinder-db-create-fpgrj\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " pod="openstack/cinder-db-create-fpgrj" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694055 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5spl\" (UniqueName: \"kubernetes.io/projected/104ec45d-e95d-40c0-80a8-d59de9e2d45a-kube-api-access-f5spl\") pod \"cinder-ea32-account-create-update-7qwh2\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " pod="openstack/cinder-ea32-account-create-update-7qwh2" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-config-data\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694167 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00962490-7e63-4ba2-95e5-d95167d392bd-operator-scripts\") pod \"barbican-db-create-tr9gx\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " pod="openstack/barbican-db-create-tr9gx" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694192 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-combined-ca-bundle\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694237 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/104ec45d-e95d-40c0-80a8-d59de9e2d45a-operator-scripts\") pod \"cinder-ea32-account-create-update-7qwh2\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " pod="openstack/cinder-ea32-account-create-update-7qwh2" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694261 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46kwv\" (UniqueName: \"kubernetes.io/projected/d65b4384-a678-4002-9583-7f89082af14a-kube-api-access-46kwv\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.694287 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-operator-scripts\") pod \"cinder-db-create-fpgrj\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " pod="openstack/cinder-db-create-fpgrj" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.695075 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-operator-scripts\") pod \"cinder-db-create-fpgrj\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " pod="openstack/cinder-db-create-fpgrj" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.706211 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/104ec45d-e95d-40c0-80a8-d59de9e2d45a-operator-scripts\") pod \"cinder-ea32-account-create-update-7qwh2\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " pod="openstack/cinder-ea32-account-create-update-7qwh2" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.715944 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5spl\" (UniqueName: \"kubernetes.io/projected/104ec45d-e95d-40c0-80a8-d59de9e2d45a-kube-api-access-f5spl\") pod \"cinder-ea32-account-create-update-7qwh2\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " pod="openstack/cinder-ea32-account-create-update-7qwh2" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.723923 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bsgn\" (UniqueName: \"kubernetes.io/projected/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-kube-api-access-2bsgn\") pod \"cinder-db-create-fpgrj\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") " pod="openstack/cinder-db-create-fpgrj" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.724714 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lqlft"] Feb 16 15:13:34 crc 
kubenswrapper[4705]: I0216 15:13:34.737215 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3bfb-account-create-update-r5cz9"] Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.810980 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00962490-7e63-4ba2-95e5-d95167d392bd-operator-scripts\") pod \"barbican-db-create-tr9gx\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " pod="openstack/barbican-db-create-tr9gx" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.813343 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-combined-ca-bundle\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.813564 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-operator-scripts\") pod \"neutron-db-create-lqlft\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " pod="openstack/neutron-db-create-lqlft" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.813656 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46kwv\" (UniqueName: \"kubernetes.io/projected/d65b4384-a678-4002-9583-7f89082af14a-kube-api-access-46kwv\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.813827 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00962490-7e63-4ba2-95e5-d95167d392bd-operator-scripts\") pod 
\"barbican-db-create-tr9gx\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " pod="openstack/barbican-db-create-tr9gx" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.814070 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc6sp\" (UniqueName: \"kubernetes.io/projected/00962490-7e63-4ba2-95e5-d95167d392bd-kube-api-access-mc6sp\") pod \"barbican-db-create-tr9gx\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " pod="openstack/barbican-db-create-tr9gx" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.814223 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm6nx\" (UniqueName: \"kubernetes.io/projected/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-kube-api-access-xm6nx\") pod \"neutron-db-create-lqlft\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " pod="openstack/neutron-db-create-lqlft" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.814350 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-config-data\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.822889 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-config-data\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.826673 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-56f8-account-create-update-kbzxq" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.829329 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-combined-ca-bundle\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.855151 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fpgrj" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.877915 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ea32-account-create-update-7qwh2" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.879541 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc6sp\" (UniqueName: \"kubernetes.io/projected/00962490-7e63-4ba2-95e5-d95167d392bd-kube-api-access-mc6sp\") pod \"barbican-db-create-tr9gx\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") " pod="openstack/barbican-db-create-tr9gx" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.886298 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46kwv\" (UniqueName: \"kubernetes.io/projected/d65b4384-a678-4002-9583-7f89082af14a-kube-api-access-46kwv\") pod \"keystone-db-sync-gmlkp\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") " pod="openstack/keystone-db-sync-gmlkp" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.896732 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fb6f-account-create-update-sg7lm"] Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.898881 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fb6f-account-create-update-sg7lm" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.910951 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.914002 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tr9gx" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.919116 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601c1c55-db3a-443a-bd6b-7d76e884697c-operator-scripts\") pod \"barbican-fb6f-account-create-update-sg7lm\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " pod="openstack/barbican-fb6f-account-create-update-sg7lm" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.919184 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm6nx\" (UniqueName: \"kubernetes.io/projected/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-kube-api-access-xm6nx\") pod \"neutron-db-create-lqlft\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " pod="openstack/neutron-db-create-lqlft" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.919259 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phnmx\" (UniqueName: \"kubernetes.io/projected/f5b60553-5a29-4222-ad99-2f33cedd3879-kube-api-access-phnmx\") pod \"neutron-3bfb-account-create-update-r5cz9\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " pod="openstack/neutron-3bfb-account-create-update-r5cz9" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.919294 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9npnd\" (UniqueName: \"kubernetes.io/projected/601c1c55-db3a-443a-bd6b-7d76e884697c-kube-api-access-9npnd\") pod 
\"barbican-fb6f-account-create-update-sg7lm\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " pod="openstack/barbican-fb6f-account-create-update-sg7lm" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.919322 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5b60553-5a29-4222-ad99-2f33cedd3879-operator-scripts\") pod \"neutron-3bfb-account-create-update-r5cz9\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " pod="openstack/neutron-3bfb-account-create-update-r5cz9" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.919390 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-operator-scripts\") pod \"neutron-db-create-lqlft\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " pod="openstack/neutron-db-create-lqlft" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.922088 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-operator-scripts\") pod \"neutron-db-create-lqlft\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " pod="openstack/neutron-db-create-lqlft" Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.934088 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.964486 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fb6f-account-create-update-sg7lm"]
Feb 16 15:13:34 crc kubenswrapper[4705]: I0216 15:13:34.965682 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm6nx\" (UniqueName: \"kubernetes.io/projected/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-kube-api-access-xm6nx\") pod \"neutron-db-create-lqlft\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " pod="openstack/neutron-db-create-lqlft"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.013360 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqlft"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.027543 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phnmx\" (UniqueName: \"kubernetes.io/projected/f5b60553-5a29-4222-ad99-2f33cedd3879-kube-api-access-phnmx\") pod \"neutron-3bfb-account-create-update-r5cz9\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " pod="openstack/neutron-3bfb-account-create-update-r5cz9"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.027657 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9npnd\" (UniqueName: \"kubernetes.io/projected/601c1c55-db3a-443a-bd6b-7d76e884697c-kube-api-access-9npnd\") pod \"barbican-fb6f-account-create-update-sg7lm\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " pod="openstack/barbican-fb6f-account-create-update-sg7lm"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.027717 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5b60553-5a29-4222-ad99-2f33cedd3879-operator-scripts\") pod \"neutron-3bfb-account-create-update-r5cz9\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " pod="openstack/neutron-3bfb-account-create-update-r5cz9"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.027903 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601c1c55-db3a-443a-bd6b-7d76e884697c-operator-scripts\") pod \"barbican-fb6f-account-create-update-sg7lm\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " pod="openstack/barbican-fb6f-account-create-update-sg7lm"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.028931 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601c1c55-db3a-443a-bd6b-7d76e884697c-operator-scripts\") pod \"barbican-fb6f-account-create-update-sg7lm\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " pod="openstack/barbican-fb6f-account-create-update-sg7lm"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.029996 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5b60553-5a29-4222-ad99-2f33cedd3879-operator-scripts\") pod \"neutron-3bfb-account-create-update-r5cz9\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " pod="openstack/neutron-3bfb-account-create-update-r5cz9"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.057504 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phnmx\" (UniqueName: \"kubernetes.io/projected/f5b60553-5a29-4222-ad99-2f33cedd3879-kube-api-access-phnmx\") pod \"neutron-3bfb-account-create-update-r5cz9\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " pod="openstack/neutron-3bfb-account-create-update-r5cz9"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.062845 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9npnd\" (UniqueName: \"kubernetes.io/projected/601c1c55-db3a-443a-bd6b-7d76e884697c-kube-api-access-9npnd\") pod \"barbican-fb6f-account-create-update-sg7lm\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " pod="openstack/barbican-fb6f-account-create-update-sg7lm"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.070934 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"669cacaec90ca1a7f976320f2337fcae6fc3da525203f5c5f902617c048d5a8c"}
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.070978 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"6b3ca43012f6a386bbd086c8a82f4fc946ab57411e76aea8b5dd567353cc5cb3"}
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.269083 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3bfb-account-create-update-r5cz9"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.352816 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fb6f-account-create-update-sg7lm"
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.425120 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-mdv7p"]
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.536195 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-56f8-account-create-update-kbzxq"]
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.622673 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fpgrj"]
Feb 16 15:13:35 crc kubenswrapper[4705]: I0216 15:13:35.928897 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ea32-account-create-update-7qwh2"]
Feb 16 15:13:36 crc kubenswrapper[4705]: W0216 15:13:36.037927 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod104ec45d_e95d_40c0_80a8_d59de9e2d45a.slice/crio-8cec760e7207c401fcd53b19a2338dacdfa2cd3f34d320a605785fef9fcc8520 WatchSource:0}: Error finding container 8cec760e7207c401fcd53b19a2338dacdfa2cd3f34d320a605785fef9fcc8520: Status 404 returned error can't find the container with id 8cec760e7207c401fcd53b19a2338dacdfa2cd3f34d320a605785fef9fcc8520
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.092454 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mdv7p" event={"ID":"ae5e7e5c-9868-457d-872b-ec1d3f34449a","Type":"ContainerStarted","Data":"32ab91c93f68da31201392a10d98f88caba3199bca15a0a94cd56707aab40d9b"}
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.098832 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ea32-account-create-update-7qwh2" event={"ID":"104ec45d-e95d-40c0-80a8-d59de9e2d45a","Type":"ContainerStarted","Data":"8cec760e7207c401fcd53b19a2338dacdfa2cd3f34d320a605785fef9fcc8520"}
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.110438 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fpgrj" event={"ID":"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f","Type":"ContainerStarted","Data":"942bfa4e17fe5d47469dc8682fa613e208400c069cce56e2e413cb6010902c4b"}
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.112780 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-56f8-account-create-update-kbzxq" event={"ID":"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d","Type":"ContainerStarted","Data":"80379d8ba240dae993e748f01c0e5d89bb908dbbbcc06e414d9ec1d6cf418431"}
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.115215 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2kkpm" event={"ID":"1eba064a-3f7c-4395-beca-1b77b85e1a29","Type":"ContainerDied","Data":"8e5b1c2dc379b87aa6c47ebc3d629748ed51bf65dca39e43eb06d0a9ecab4706"}
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.115244 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e5b1c2dc379b87aa6c47ebc3d629748ed51bf65dca39e43eb06d0a9ecab4706"
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.153581 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gmlkp"]
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.169560 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tr9gx"]
Feb 16 15:13:36 crc kubenswrapper[4705]: W0216 15:13:36.231632 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00962490_7e63_4ba2_95e5_d95167d392bd.slice/crio-2abb313d30b0a4bf470261fc82e91a17d09a1f3cfe4c0cc6540eccf197849402 WatchSource:0}: Error finding container 2abb313d30b0a4bf470261fc82e91a17d09a1f3cfe4c0cc6540eccf197849402: Status 404 returned error can't find the container with id 2abb313d30b0a4bf470261fc82e91a17d09a1f3cfe4c0cc6540eccf197849402
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.235902 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lqlft"]
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.246828 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:36 crc kubenswrapper[4705]: W0216 15:13:36.256313 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0216c47c_a1cb_48d7_a1cd_96bc1e7726b5.slice/crio-8bccec4003cae6d97bdaa837b0b960cc405db6f695a192c6c2602c16ecda3692 WatchSource:0}: Error finding container 8bccec4003cae6d97bdaa837b0b960cc405db6f695a192c6c2602c16ecda3692: Status 404 returned error can't find the container with id 8bccec4003cae6d97bdaa837b0b960cc405db6f695a192c6c2602c16ecda3692
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.304873 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3bfb-account-create-update-r5cz9"]
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.324071 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fb6f-account-create-update-sg7lm"]
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.392128 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-db-sync-config-data\") pod \"1eba064a-3f7c-4395-beca-1b77b85e1a29\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") "
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.392247 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tchhs\" (UniqueName: \"kubernetes.io/projected/1eba064a-3f7c-4395-beca-1b77b85e1a29-kube-api-access-tchhs\") pod \"1eba064a-3f7c-4395-beca-1b77b85e1a29\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") "
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.392498 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-config-data\") pod \"1eba064a-3f7c-4395-beca-1b77b85e1a29\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") "
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.392554 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-combined-ca-bundle\") pod \"1eba064a-3f7c-4395-beca-1b77b85e1a29\" (UID: \"1eba064a-3f7c-4395-beca-1b77b85e1a29\") "
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.400001 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1eba064a-3f7c-4395-beca-1b77b85e1a29" (UID: "1eba064a-3f7c-4395-beca-1b77b85e1a29"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.403660 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eba064a-3f7c-4395-beca-1b77b85e1a29-kube-api-access-tchhs" (OuterVolumeSpecName: "kube-api-access-tchhs") pod "1eba064a-3f7c-4395-beca-1b77b85e1a29" (UID: "1eba064a-3f7c-4395-beca-1b77b85e1a29"). InnerVolumeSpecName "kube-api-access-tchhs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.444871 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1eba064a-3f7c-4395-beca-1b77b85e1a29" (UID: "1eba064a-3f7c-4395-beca-1b77b85e1a29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.499111 4705 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.499174 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tchhs\" (UniqueName: \"kubernetes.io/projected/1eba064a-3f7c-4395-beca-1b77b85e1a29-kube-api-access-tchhs\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.500101 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.552363 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-config-data" (OuterVolumeSpecName: "config-data") pod "1eba064a-3f7c-4395-beca-1b77b85e1a29" (UID: "1eba064a-3f7c-4395-beca-1b77b85e1a29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:36 crc kubenswrapper[4705]: I0216 15:13:36.606800 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eba064a-3f7c-4395-beca-1b77b85e1a29-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:36 crc kubenswrapper[4705]: E0216 15:13:36.962083 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00962490_7e63_4ba2_95e5_d95167d392bd.slice/crio-ca5ac92a7dc65970aa1597da51d8d235081d2d56a401566acfbc85af5a226fbd.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.135356 4705 generic.go:334] "Generic (PLEG): container finished" podID="ae5e7e5c-9868-457d-872b-ec1d3f34449a" containerID="018bf846d7fe64a859e3c5304849a02f3a4179f776cea2e8ccc7acda8fa71421" exitCode=0
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.135581 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mdv7p" event={"ID":"ae5e7e5c-9868-457d-872b-ec1d3f34449a","Type":"ContainerDied","Data":"018bf846d7fe64a859e3c5304849a02f3a4179f776cea2e8ccc7acda8fa71421"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.157749 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bfb-account-create-update-r5cz9" event={"ID":"f5b60553-5a29-4222-ad99-2f33cedd3879","Type":"ContainerStarted","Data":"264622adf5af6886a931115cc69de7300b2b26acd7842f92edb4bffbce142d23"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.157804 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bfb-account-create-update-r5cz9" event={"ID":"f5b60553-5a29-4222-ad99-2f33cedd3879","Type":"ContainerStarted","Data":"86656d0cc5980e421a8e5acaa1ca2be74b7f4f8ab421aabeda25aec38dfdd925"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.161206 4705 generic.go:334] "Generic (PLEG): container finished" podID="cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" containerID="15fa487fc78680eebbada617a958beee0dc93fabf1acb0258ad86c6a6637b4a3" exitCode=0
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.161254 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-56f8-account-create-update-kbzxq" event={"ID":"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d","Type":"ContainerDied","Data":"15fa487fc78680eebbada617a958beee0dc93fabf1acb0258ad86c6a6637b4a3"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.164426 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ed43376-64ee-4fa7-9e24-00d85997e8c1","Type":"ContainerStarted","Data":"9825a109862b75e7878443427c37f65436e211e0d9a768210514e2164858b049"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.168984 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gmlkp" event={"ID":"d65b4384-a678-4002-9583-7f89082af14a","Type":"ContainerStarted","Data":"80c98d65087b5806a9de73aa66d3c3e78664c260bb21df0b7b979c3c0df92558"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.189553 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-3bfb-account-create-update-r5cz9" podStartSLOduration=3.189526705 podStartE2EDuration="3.189526705s" podCreationTimestamp="2026-02-16 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:37.175248263 +0000 UTC m=+1211.360225339" watchObservedRunningTime="2026-02-16 15:13:37.189526705 +0000 UTC m=+1211.374503781"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.194617 4705 generic.go:334] "Generic (PLEG): container finished" podID="00962490-7e63-4ba2-95e5-d95167d392bd" containerID="ca5ac92a7dc65970aa1597da51d8d235081d2d56a401566acfbc85af5a226fbd" exitCode=0
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.194710 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tr9gx" event={"ID":"00962490-7e63-4ba2-95e5-d95167d392bd","Type":"ContainerDied","Data":"ca5ac92a7dc65970aa1597da51d8d235081d2d56a401566acfbc85af5a226fbd"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.194759 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tr9gx" event={"ID":"00962490-7e63-4ba2-95e5-d95167d392bd","Type":"ContainerStarted","Data":"2abb313d30b0a4bf470261fc82e91a17d09a1f3cfe4c0cc6540eccf197849402"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.197697 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqlft" event={"ID":"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5","Type":"ContainerStarted","Data":"9d7693ed517cfe584b58f1eb27ff9e018459aad540cb357f988a64c00e64f25e"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.197725 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqlft" event={"ID":"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5","Type":"ContainerStarted","Data":"8bccec4003cae6d97bdaa837b0b960cc405db6f695a192c6c2602c16ecda3692"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.212950 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"b77cf8fdb4c8919cbaa8f245ebefdf2f966303f558bcfa5fe069e5521b1f4e51"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.213000 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"27b52bacba22afeb30b60230c4c94ce40477695471eb40296b023c30ef071902"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.226405 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ea32-account-create-update-7qwh2" event={"ID":"104ec45d-e95d-40c0-80a8-d59de9e2d45a","Type":"ContainerStarted","Data":"be8b3e0326ea71bbc9f9e87ea816230ad05f7c364ba58e44e8812ca01437d1c1"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.231769 4705 generic.go:334] "Generic (PLEG): container finished" podID="6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" containerID="3441f97b82c61443005d5c636ffa1b9046d09392c2db4e6c04fcbda2de0e8e36" exitCode=0
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.231845 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fpgrj" event={"ID":"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f","Type":"ContainerDied","Data":"3441f97b82c61443005d5c636ffa1b9046d09392c2db4e6c04fcbda2de0e8e36"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.238399 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fb6f-account-create-update-sg7lm" event={"ID":"601c1c55-db3a-443a-bd6b-7d76e884697c","Type":"ContainerStarted","Data":"bdfd63c3ecc1595f3e167fa9202bd03a5c184ef38a3f05f7c5708bbb69702bbe"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.238446 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fb6f-account-create-update-sg7lm" event={"ID":"601c1c55-db3a-443a-bd6b-7d76e884697c","Type":"ContainerStarted","Data":"d5b33278f5b5080f081d8ed65f9d08614fde4d9fadd6cd96ae2ffb1908a8ce38"}
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.238473 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2kkpm"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.307390 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-lqlft" podStartSLOduration=3.307340069 podStartE2EDuration="3.307340069s" podCreationTimestamp="2026-02-16 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:37.267475217 +0000 UTC m=+1211.452452293" watchObservedRunningTime="2026-02-16 15:13:37.307340069 +0000 UTC m=+1211.492317145"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.370079 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ea32-account-create-update-7qwh2" podStartSLOduration=3.370055143 podStartE2EDuration="3.370055143s" podCreationTimestamp="2026-02-16 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:37.296066942 +0000 UTC m=+1211.481044038" watchObservedRunningTime="2026-02-16 15:13:37.370055143 +0000 UTC m=+1211.555032219"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.379006 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-fb6f-account-create-update-sg7lm" podStartSLOduration=3.378985114 podStartE2EDuration="3.378985114s" podCreationTimestamp="2026-02-16 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:37.318802551 +0000 UTC m=+1211.503779637" watchObservedRunningTime="2026-02-16 15:13:37.378985114 +0000 UTC m=+1211.563962190"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.634527 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pmrvk"]
Feb 16 15:13:37 crc kubenswrapper[4705]: E0216 15:13:37.636720 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eba064a-3f7c-4395-beca-1b77b85e1a29" containerName="glance-db-sync"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.636892 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eba064a-3f7c-4395-beca-1b77b85e1a29" containerName="glance-db-sync"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.637390 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eba064a-3f7c-4395-beca-1b77b85e1a29" containerName="glance-db-sync"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.639151 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.696475 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pmrvk"]
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.744757 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.745974 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-config\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.746097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.746265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.746431 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kc2z\" (UniqueName: \"kubernetes.io/projected/e1826cbb-e404-4385-8af6-36eab56118fb-kube-api-access-6kc2z\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.849321 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.849408 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-config\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.849426 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.849488 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.849531 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kc2z\" (UniqueName: \"kubernetes.io/projected/e1826cbb-e404-4385-8af6-36eab56118fb-kube-api-access-6kc2z\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.852702 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.852966 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.853248 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-config\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.853261 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.898006 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kc2z\" (UniqueName: \"kubernetes.io/projected/e1826cbb-e404-4385-8af6-36eab56118fb-kube-api-access-6kc2z\") pod \"dnsmasq-dns-5b946c75cc-pmrvk\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:37 crc kubenswrapper[4705]: I0216 15:13:37.976233 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk"
Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.252340 4705 generic.go:334] "Generic (PLEG): container finished" podID="0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" containerID="9d7693ed517cfe584b58f1eb27ff9e018459aad540cb357f988a64c00e64f25e" exitCode=0
Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.252774 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqlft" event={"ID":"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5","Type":"ContainerDied","Data":"9d7693ed517cfe584b58f1eb27ff9e018459aad540cb357f988a64c00e64f25e"}
Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.254159 4705 generic.go:334] "Generic (PLEG): container finished" podID="f5b60553-5a29-4222-ad99-2f33cedd3879" containerID="264622adf5af6886a931115cc69de7300b2b26acd7842f92edb4bffbce142d23" exitCode=0
Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.254256 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bfb-account-create-update-r5cz9" event={"ID":"f5b60553-5a29-4222-ad99-2f33cedd3879","Type":"ContainerDied","Data":"264622adf5af6886a931115cc69de7300b2b26acd7842f92edb4bffbce142d23"}
Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.256876 4705 generic.go:334] "Generic (PLEG): container finished" podID="104ec45d-e95d-40c0-80a8-d59de9e2d45a" containerID="be8b3e0326ea71bbc9f9e87ea816230ad05f7c364ba58e44e8812ca01437d1c1" exitCode=0
Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.257088 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ea32-account-create-update-7qwh2" event={"ID":"104ec45d-e95d-40c0-80a8-d59de9e2d45a","Type":"ContainerDied","Data":"be8b3e0326ea71bbc9f9e87ea816230ad05f7c364ba58e44e8812ca01437d1c1"}
Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.261280 4705 generic.go:334] "Generic (PLEG): container finished" podID="601c1c55-db3a-443a-bd6b-7d76e884697c" containerID="bdfd63c3ecc1595f3e167fa9202bd03a5c184ef38a3f05f7c5708bbb69702bbe" exitCode=0
Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.261453 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fb6f-account-create-update-sg7lm" event={"ID":"601c1c55-db3a-443a-bd6b-7d76e884697c","Type":"ContainerDied","Data":"bdfd63c3ecc1595f3e167fa9202bd03a5c184ef38a3f05f7c5708bbb69702bbe"}
Feb 16 15:13:38 crc kubenswrapper[4705]: I0216 15:13:38.512873 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pmrvk"]
Feb 16 15:13:38 crc kubenswrapper[4705]: W0216 15:13:38.516260 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1826cbb_e404_4385_8af6_36eab56118fb.slice/crio-ef7928dfa02730fd5e116d7aa6386088db4c89a5e4b1e91534438dfa1a70e221 WatchSource:0}: Error finding container ef7928dfa02730fd5e116d7aa6386088db4c89a5e4b1e91534438dfa1a70e221: Status 404 returned error can't find the container with id ef7928dfa02730fd5e116d7aa6386088db4c89a5e4b1e91534438dfa1a70e221
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.288357 4705 generic.go:334] "Generic (PLEG): container finished" podID="e1826cbb-e404-4385-8af6-36eab56118fb" containerID="24e97e68f945ea90afb1476172863c94c103dc49fd76b27d1442100f2e0fdb3f" exitCode=0
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.288487 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" event={"ID":"e1826cbb-e404-4385-8af6-36eab56118fb","Type":"ContainerDied","Data":"24e97e68f945ea90afb1476172863c94c103dc49fd76b27d1442100f2e0fdb3f"}
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.288915 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" event={"ID":"e1826cbb-e404-4385-8af6-36eab56118fb","Type":"ContainerStarted","Data":"ef7928dfa02730fd5e116d7aa6386088db4c89a5e4b1e91534438dfa1a70e221"}
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.586040 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tr9gx"
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.724654 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00962490-7e63-4ba2-95e5-d95167d392bd-operator-scripts\") pod \"00962490-7e63-4ba2-95e5-d95167d392bd\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") "
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.725311 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc6sp\" (UniqueName: \"kubernetes.io/projected/00962490-7e63-4ba2-95e5-d95167d392bd-kube-api-access-mc6sp\") pod \"00962490-7e63-4ba2-95e5-d95167d392bd\" (UID: \"00962490-7e63-4ba2-95e5-d95167d392bd\") "
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.726191 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00962490-7e63-4ba2-95e5-d95167d392bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00962490-7e63-4ba2-95e5-d95167d392bd" (UID: "00962490-7e63-4ba2-95e5-d95167d392bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.726582 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00962490-7e63-4ba2-95e5-d95167d392bd-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.750475 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00962490-7e63-4ba2-95e5-d95167d392bd-kube-api-access-mc6sp" (OuterVolumeSpecName: "kube-api-access-mc6sp") pod "00962490-7e63-4ba2-95e5-d95167d392bd" (UID: "00962490-7e63-4ba2-95e5-d95167d392bd"). InnerVolumeSpecName "kube-api-access-mc6sp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.751152 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fpgrj"
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.832952 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc6sp\" (UniqueName: \"kubernetes.io/projected/00962490-7e63-4ba2-95e5-d95167d392bd-kube-api-access-mc6sp\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.897593 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mdv7p"
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.911932 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-56f8-account-create-update-kbzxq"
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.934161 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-operator-scripts\") pod \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") "
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.934412 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bsgn\" (UniqueName: \"kubernetes.io/projected/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-kube-api-access-2bsgn\") pod \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\" (UID: \"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f\") "
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.934983 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" (UID: "6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.936625 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.958503 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqlft"
Feb 16 15:13:39 crc kubenswrapper[4705]: I0216 15:13:39.974062 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-kube-api-access-2bsgn" (OuterVolumeSpecName: "kube-api-access-2bsgn") pod "6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" (UID: "6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f"). InnerVolumeSpecName "kube-api-access-2bsgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.039615 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wbjh\" (UniqueName: \"kubernetes.io/projected/ae5e7e5c-9868-457d-872b-ec1d3f34449a-kube-api-access-5wbjh\") pod \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") "
Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.040076 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-operator-scripts\") pod \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") "
Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.040348 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwlqz\" (UniqueName:
\"kubernetes.io/projected/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-kube-api-access-rwlqz\") pod \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\" (UID: \"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.040673 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-operator-scripts\") pod \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.040924 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm6nx\" (UniqueName: \"kubernetes.io/projected/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-kube-api-access-xm6nx\") pod \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\" (UID: \"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.041061 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" (UID: "cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.041272 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae5e7e5c-9868-457d-872b-ec1d3f34449a-operator-scripts\") pod \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\" (UID: \"ae5e7e5c-9868-457d-872b-ec1d3f34449a\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.041409 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" (UID: "0216c47c-a1cb-48d7-a1cd-96bc1e7726b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.042535 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5e7e5c-9868-457d-872b-ec1d3f34449a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae5e7e5c-9868-457d-872b-ec1d3f34449a" (UID: "ae5e7e5c-9868-457d-872b-ec1d3f34449a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.043846 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bsgn\" (UniqueName: \"kubernetes.io/projected/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f-kube-api-access-2bsgn\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.044022 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.044483 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae5e7e5c-9868-457d-872b-ec1d3f34449a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.045002 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.052657 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5e7e5c-9868-457d-872b-ec1d3f34449a-kube-api-access-5wbjh" (OuterVolumeSpecName: "kube-api-access-5wbjh") pod "ae5e7e5c-9868-457d-872b-ec1d3f34449a" (UID: "ae5e7e5c-9868-457d-872b-ec1d3f34449a"). InnerVolumeSpecName "kube-api-access-5wbjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.052844 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-kube-api-access-xm6nx" (OuterVolumeSpecName: "kube-api-access-xm6nx") pod "0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" (UID: "0216c47c-a1cb-48d7-a1cd-96bc1e7726b5"). 
InnerVolumeSpecName "kube-api-access-xm6nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.054505 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-kube-api-access-rwlqz" (OuterVolumeSpecName: "kube-api-access-rwlqz") pod "cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" (UID: "cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d"). InnerVolumeSpecName "kube-api-access-rwlqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.076347 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3bfb-account-create-update-r5cz9" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.082999 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fb6f-account-create-update-sg7lm" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.098181 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ea32-account-create-update-7qwh2" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.151469 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm6nx\" (UniqueName: \"kubernetes.io/projected/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5-kube-api-access-xm6nx\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.151837 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wbjh\" (UniqueName: \"kubernetes.io/projected/ae5e7e5c-9868-457d-872b-ec1d3f34449a-kube-api-access-5wbjh\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.151851 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwlqz\" (UniqueName: \"kubernetes.io/projected/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d-kube-api-access-rwlqz\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.253188 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9npnd\" (UniqueName: \"kubernetes.io/projected/601c1c55-db3a-443a-bd6b-7d76e884697c-kube-api-access-9npnd\") pod \"601c1c55-db3a-443a-bd6b-7d76e884697c\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.253281 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/104ec45d-e95d-40c0-80a8-d59de9e2d45a-operator-scripts\") pod \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.253359 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phnmx\" (UniqueName: \"kubernetes.io/projected/f5b60553-5a29-4222-ad99-2f33cedd3879-kube-api-access-phnmx\") pod \"f5b60553-5a29-4222-ad99-2f33cedd3879\" (UID: 
\"f5b60553-5a29-4222-ad99-2f33cedd3879\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.253462 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5b60553-5a29-4222-ad99-2f33cedd3879-operator-scripts\") pod \"f5b60553-5a29-4222-ad99-2f33cedd3879\" (UID: \"f5b60553-5a29-4222-ad99-2f33cedd3879\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.253523 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601c1c55-db3a-443a-bd6b-7d76e884697c-operator-scripts\") pod \"601c1c55-db3a-443a-bd6b-7d76e884697c\" (UID: \"601c1c55-db3a-443a-bd6b-7d76e884697c\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.253595 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5spl\" (UniqueName: \"kubernetes.io/projected/104ec45d-e95d-40c0-80a8-d59de9e2d45a-kube-api-access-f5spl\") pod \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\" (UID: \"104ec45d-e95d-40c0-80a8-d59de9e2d45a\") " Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.254130 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/601c1c55-db3a-443a-bd6b-7d76e884697c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "601c1c55-db3a-443a-bd6b-7d76e884697c" (UID: "601c1c55-db3a-443a-bd6b-7d76e884697c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.254259 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/104ec45d-e95d-40c0-80a8-d59de9e2d45a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "104ec45d-e95d-40c0-80a8-d59de9e2d45a" (UID: "104ec45d-e95d-40c0-80a8-d59de9e2d45a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.254414 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5b60553-5a29-4222-ad99-2f33cedd3879-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5b60553-5a29-4222-ad99-2f33cedd3879" (UID: "f5b60553-5a29-4222-ad99-2f33cedd3879"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.258967 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/104ec45d-e95d-40c0-80a8-d59de9e2d45a-kube-api-access-f5spl" (OuterVolumeSpecName: "kube-api-access-f5spl") pod "104ec45d-e95d-40c0-80a8-d59de9e2d45a" (UID: "104ec45d-e95d-40c0-80a8-d59de9e2d45a"). InnerVolumeSpecName "kube-api-access-f5spl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.259400 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b60553-5a29-4222-ad99-2f33cedd3879-kube-api-access-phnmx" (OuterVolumeSpecName: "kube-api-access-phnmx") pod "f5b60553-5a29-4222-ad99-2f33cedd3879" (UID: "f5b60553-5a29-4222-ad99-2f33cedd3879"). InnerVolumeSpecName "kube-api-access-phnmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.261552 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601c1c55-db3a-443a-bd6b-7d76e884697c-kube-api-access-9npnd" (OuterVolumeSpecName: "kube-api-access-9npnd") pod "601c1c55-db3a-443a-bd6b-7d76e884697c" (UID: "601c1c55-db3a-443a-bd6b-7d76e884697c"). InnerVolumeSpecName "kube-api-access-9npnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.304262 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-56f8-account-create-update-kbzxq" event={"ID":"cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d","Type":"ContainerDied","Data":"80379d8ba240dae993e748f01c0e5d89bb908dbbbcc06e414d9ec1d6cf418431"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.305795 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80379d8ba240dae993e748f01c0e5d89bb908dbbbcc06e414d9ec1d6cf418431" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.304667 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-56f8-account-create-update-kbzxq" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.309760 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tr9gx" event={"ID":"00962490-7e63-4ba2-95e5-d95167d392bd","Type":"ContainerDied","Data":"2abb313d30b0a4bf470261fc82e91a17d09a1f3cfe4c0cc6540eccf197849402"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.309803 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2abb313d30b0a4bf470261fc82e91a17d09a1f3cfe4c0cc6540eccf197849402" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.309844 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-tr9gx" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.313440 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqlft" event={"ID":"0216c47c-a1cb-48d7-a1cd-96bc1e7726b5","Type":"ContainerDied","Data":"8bccec4003cae6d97bdaa837b0b960cc405db6f695a192c6c2602c16ecda3692"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.313496 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bccec4003cae6d97bdaa837b0b960cc405db6f695a192c6c2602c16ecda3692" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.313594 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqlft" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.315512 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mdv7p" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.315522 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mdv7p" event={"ID":"ae5e7e5c-9868-457d-872b-ec1d3f34449a","Type":"ContainerDied","Data":"32ab91c93f68da31201392a10d98f88caba3199bca15a0a94cd56707aab40d9b"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.315651 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32ab91c93f68da31201392a10d98f88caba3199bca15a0a94cd56707aab40d9b" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.317993 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ea32-account-create-update-7qwh2" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.318011 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ea32-account-create-update-7qwh2" event={"ID":"104ec45d-e95d-40c0-80a8-d59de9e2d45a","Type":"ContainerDied","Data":"8cec760e7207c401fcd53b19a2338dacdfa2cd3f34d320a605785fef9fcc8520"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.318075 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cec760e7207c401fcd53b19a2338dacdfa2cd3f34d320a605785fef9fcc8520" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.319801 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fb6f-account-create-update-sg7lm" event={"ID":"601c1c55-db3a-443a-bd6b-7d76e884697c","Type":"ContainerDied","Data":"d5b33278f5b5080f081d8ed65f9d08614fde4d9fadd6cd96ae2ffb1908a8ce38"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.319839 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5b33278f5b5080f081d8ed65f9d08614fde4d9fadd6cd96ae2ffb1908a8ce38" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.319898 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fb6f-account-create-update-sg7lm" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.323422 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"6eb12f019878e65aca4af6ec05215ffb4fdac243dce661df95c4668ac3f9270d"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.327362 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3bfb-account-create-update-r5cz9" event={"ID":"f5b60553-5a29-4222-ad99-2f33cedd3879","Type":"ContainerDied","Data":"86656d0cc5980e421a8e5acaa1ca2be74b7f4f8ab421aabeda25aec38dfdd925"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.327436 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86656d0cc5980e421a8e5acaa1ca2be74b7f4f8ab421aabeda25aec38dfdd925" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.327533 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3bfb-account-create-update-r5cz9" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.341801 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" event={"ID":"e1826cbb-e404-4385-8af6-36eab56118fb","Type":"ContainerStarted","Data":"5fd932773a38fe8094be9793428326865d5d26e23ac0a0bec85a97b75dc16ba5"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.342820 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.349400 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fpgrj" event={"ID":"6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f","Type":"ContainerDied","Data":"942bfa4e17fe5d47469dc8682fa613e208400c069cce56e2e413cb6010902c4b"} Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.349436 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942bfa4e17fe5d47469dc8682fa613e208400c069cce56e2e413cb6010902c4b" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.349495 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-fpgrj" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.365230 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5b60553-5a29-4222-ad99-2f33cedd3879-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.365494 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/601c1c55-db3a-443a-bd6b-7d76e884697c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.365597 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5spl\" (UniqueName: \"kubernetes.io/projected/104ec45d-e95d-40c0-80a8-d59de9e2d45a-kube-api-access-f5spl\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.365658 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9npnd\" (UniqueName: \"kubernetes.io/projected/601c1c55-db3a-443a-bd6b-7d76e884697c-kube-api-access-9npnd\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.365715 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/104ec45d-e95d-40c0-80a8-d59de9e2d45a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.365843 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phnmx\" (UniqueName: \"kubernetes.io/projected/f5b60553-5a29-4222-ad99-2f33cedd3879-kube-api-access-phnmx\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:40 crc kubenswrapper[4705]: I0216 15:13:40.380291 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" podStartSLOduration=3.380264758 podStartE2EDuration="3.380264758s" 
podCreationTimestamp="2026-02-16 15:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:40.362670373 +0000 UTC m=+1214.547647449" watchObservedRunningTime="2026-02-16 15:13:40.380264758 +0000 UTC m=+1214.565241834" Feb 16 15:13:41 crc kubenswrapper[4705]: I0216 15:13:41.371509 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"55c0bab29c8b98919f346e968526e39d942f2fe6ba8f4666849596b395ec332a"} Feb 16 15:13:43 crc kubenswrapper[4705]: I0216 15:13:43.397024 4705 generic.go:334] "Generic (PLEG): container finished" podID="0ed43376-64ee-4fa7-9e24-00d85997e8c1" containerID="9825a109862b75e7878443427c37f65436e211e0d9a768210514e2164858b049" exitCode=0 Feb 16 15:13:43 crc kubenswrapper[4705]: I0216 15:13:43.397165 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ed43376-64ee-4fa7-9e24-00d85997e8c1","Type":"ContainerDied","Data":"9825a109862b75e7878443427c37f65436e211e0d9a768210514e2164858b049"} Feb 16 15:13:43 crc kubenswrapper[4705]: I0216 15:13:43.756791 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:13:44 crc kubenswrapper[4705]: I0216 15:13:44.418253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ed43376-64ee-4fa7-9e24-00d85997e8c1","Type":"ContainerStarted","Data":"95af8624eceea65afe9b4e1dc2ea480c5f5a5096093f129be79d6604f592e37b"} Feb 16 15:13:44 crc kubenswrapper[4705]: I0216 15:13:44.443193 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gmlkp" event={"ID":"d65b4384-a678-4002-9583-7f89082af14a","Type":"ContainerStarted","Data":"01529216e6cfee37b45daa7e445d747074cda05873b794d38ec8cf37020c339e"} Feb 16 
15:13:44 crc kubenswrapper[4705]: I0216 15:13:44.443289 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"b2930a5272042600ed06076da74ab456dabb5a50c8ec9bceea362fa528cf4465"} Feb 16 15:13:44 crc kubenswrapper[4705]: I0216 15:13:44.443306 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"6b633566da895b567918e8c2ae82559ba9c75d5c76f55c60b3e5f75d8633e7d5"} Feb 16 15:13:44 crc kubenswrapper[4705]: I0216 15:13:44.451942 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-gmlkp" podStartSLOduration=2.905517526 podStartE2EDuration="10.451920891s" podCreationTimestamp="2026-02-16 15:13:34 +0000 UTC" firstStartedPulling="2026-02-16 15:13:36.244238034 +0000 UTC m=+1210.429215110" lastFinishedPulling="2026-02-16 15:13:43.790641389 +0000 UTC m=+1217.975618475" observedRunningTime="2026-02-16 15:13:44.450059759 +0000 UTC m=+1218.635036835" watchObservedRunningTime="2026-02-16 15:13:44.451920891 +0000 UTC m=+1218.636897967" Feb 16 15:13:46 crc kubenswrapper[4705]: I0216 15:13:46.491685 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"654c94a7a0cff329fed2bdd639bfa7a86b985cf83a0ce2ddcdbdafb7bd78f5b5"} Feb 16 15:13:46 crc kubenswrapper[4705]: I0216 15:13:46.495455 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"66f278accad248d50db1bff0edb1e77309684037394874bf31267967c7e4a642"} Feb 16 15:13:46 crc kubenswrapper[4705]: I0216 15:13:46.495544 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"f616ae0b1b66a1d1c75d6e06fef5b771f580c6a8c7c6f7bae1c3ceecf3195e07"} Feb 16 15:13:47 crc kubenswrapper[4705]: I0216 15:13:47.537864 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"dd338201f1295e86847580333ba7e4be8606e52f0c3784fd62e242f21730cb84"} Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:47.979690 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.056247 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zg96k"] Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.056972 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-zg96k" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerName="dnsmasq-dns" containerID="cri-o://f92aa91a0bd4d4840962889a87f1afcde2ceebd9899012f0e33163043e3a2987" gracePeriod=10 Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.576580 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"e86082918e26262b09bf98023c01f770a38c7b4039714ada4b9cbb4796204dd8"} Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.579246 4705 generic.go:334] "Generic (PLEG): container finished" podID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerID="f92aa91a0bd4d4840962889a87f1afcde2ceebd9899012f0e33163043e3a2987" exitCode=0 Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.579428 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zg96k" 
event={"ID":"af8b1ad4-1803-403b-bc68-8c6ccb877b11","Type":"ContainerDied","Data":"f92aa91a0bd4d4840962889a87f1afcde2ceebd9899012f0e33163043e3a2987"}
Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.582391 4705 generic.go:334] "Generic (PLEG): container finished" podID="d65b4384-a678-4002-9583-7f89082af14a" containerID="01529216e6cfee37b45daa7e445d747074cda05873b794d38ec8cf37020c339e" exitCode=0
Feb 16 15:13:48 crc kubenswrapper[4705]: I0216 15:13:48.582466 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gmlkp" event={"ID":"d65b4384-a678-4002-9583-7f89082af14a","Type":"ContainerDied","Data":"01529216e6cfee37b45daa7e445d747074cda05873b794d38ec8cf37020c339e"}
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.013420 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.085835 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-dns-svc\") pod \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") "
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.086096 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-nb\") pod \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") "
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.086159 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-sb\") pod \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") "
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.086256 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-config\") pod \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") "
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.086306 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqtrj\" (UniqueName: \"kubernetes.io/projected/af8b1ad4-1803-403b-bc68-8c6ccb877b11-kube-api-access-kqtrj\") pod \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\" (UID: \"af8b1ad4-1803-403b-bc68-8c6ccb877b11\") "
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.091867 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8b1ad4-1803-403b-bc68-8c6ccb877b11-kube-api-access-kqtrj" (OuterVolumeSpecName: "kube-api-access-kqtrj") pod "af8b1ad4-1803-403b-bc68-8c6ccb877b11" (UID: "af8b1ad4-1803-403b-bc68-8c6ccb877b11"). InnerVolumeSpecName "kube-api-access-kqtrj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.146619 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-config" (OuterVolumeSpecName: "config") pod "af8b1ad4-1803-403b-bc68-8c6ccb877b11" (UID: "af8b1ad4-1803-403b-bc68-8c6ccb877b11"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.147963 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af8b1ad4-1803-403b-bc68-8c6ccb877b11" (UID: "af8b1ad4-1803-403b-bc68-8c6ccb877b11"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.151021 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "af8b1ad4-1803-403b-bc68-8c6ccb877b11" (UID: "af8b1ad4-1803-403b-bc68-8c6ccb877b11"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.165093 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "af8b1ad4-1803-403b-bc68-8c6ccb877b11" (UID: "af8b1ad4-1803-403b-bc68-8c6ccb877b11"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.189541 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.189586 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.189602 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.189615 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af8b1ad4-1803-403b-bc68-8c6ccb877b11-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.189632 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqtrj\" (UniqueName: \"kubernetes.io/projected/af8b1ad4-1803-403b-bc68-8c6ccb877b11-kube-api-access-kqtrj\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.599794 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"5345d2d2965fa27c8b3c6897875843cd5e66e7db0b292dfc11d468f661399df9"}
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.601727 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a1c8c609-3b8c-48d1-9731-56451bf10919","Type":"ContainerStarted","Data":"9a603b01c759703e43b1501dd3ebd5a7147577da2597b51c3fe4e9bb144608a6"}
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.602393 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-zg96k" event={"ID":"af8b1ad4-1803-403b-bc68-8c6ccb877b11","Type":"ContainerDied","Data":"d96f20877dfa6c0327b80e566c47754bd3fe080f30a415bbffd8ba72ac738b94"}
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.602432 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-zg96k"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.602463 4705 scope.go:117] "RemoveContainer" containerID="f92aa91a0bd4d4840962889a87f1afcde2ceebd9899012f0e33163043e3a2987"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.611859 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ed43376-64ee-4fa7-9e24-00d85997e8c1","Type":"ContainerStarted","Data":"54f55d40a2139e9694d2d9eef26202b2ed81d8cd9dab629264ea8cf4c1c1274f"}
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.612239 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ed43376-64ee-4fa7-9e24-00d85997e8c1","Type":"ContainerStarted","Data":"c89633df1ef1ea656b5d1ea07655513c6c01edb2957d15b0346a24143ccb478a"}
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.645989 4705 scope.go:117] "RemoveContainer" containerID="707d5db016ee71c7be05915614101d9c579374a5ac210067cf65362c8d2b2120"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.667991 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.299302492 podStartE2EDuration="51.667964635s" podCreationTimestamp="2026-02-16 15:12:58 +0000 UTC" firstStartedPulling="2026-02-16 15:13:32.84586877 +0000 UTC m=+1207.030845846" lastFinishedPulling="2026-02-16 15:13:45.214530913 +0000 UTC m=+1219.399507989" observedRunningTime="2026-02-16 15:13:49.656975406 +0000 UTC m=+1223.841952482" watchObservedRunningTime="2026-02-16 15:13:49.667964635 +0000 UTC m=+1223.852941711"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.712735 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.712713534 podStartE2EDuration="18.712713534s" podCreationTimestamp="2026-02-16 15:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:49.701796977 +0000 UTC m=+1223.886774053" watchObservedRunningTime="2026-02-16 15:13:49.712713534 +0000 UTC m=+1223.897690610"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.739734 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zg96k"]
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.750441 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-zg96k"]
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.987827 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dcqcn"]
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.988697 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b60553-5a29-4222-ad99-2f33cedd3879" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.988719 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b60553-5a29-4222-ad99-2f33cedd3879" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.988748 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00962490-7e63-4ba2-95e5-d95167d392bd" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.988756 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="00962490-7e63-4ba2-95e5-d95167d392bd" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.988768 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.988774 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.988792 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerName="init"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.988798 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerName="init"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.988843 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.988850 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.989209 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104ec45d-e95d-40c0-80a8-d59de9e2d45a" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989222 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="104ec45d-e95d-40c0-80a8-d59de9e2d45a" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.989241 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerName="dnsmasq-dns"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989248 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerName="dnsmasq-dns"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.989267 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5e7e5c-9868-457d-872b-ec1d3f34449a" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989275 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5e7e5c-9868-457d-872b-ec1d3f34449a" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.989292 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601c1c55-db3a-443a-bd6b-7d76e884697c" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989299 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="601c1c55-db3a-443a-bd6b-7d76e884697c" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: E0216 15:13:49.989326 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989334 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989533 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="601c1c55-db3a-443a-bd6b-7d76e884697c" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989548 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="00962490-7e63-4ba2-95e5-d95167d392bd" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989560 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989574 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="104ec45d-e95d-40c0-80a8-d59de9e2d45a" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989588 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5e7e5c-9868-457d-872b-ec1d3f34449a" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989599 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989607 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" containerName="dnsmasq-dns"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989615 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b60553-5a29-4222-ad99-2f33cedd3879" containerName="mariadb-account-create-update"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.989624 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" containerName="mariadb-database-create"
Feb 16 15:13:49 crc kubenswrapper[4705]: I0216 15:13:49.990796 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.001862 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.025758 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dcqcn"]
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.030965 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-config\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.031023 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.031074 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.031132 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.031186 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhc8w\" (UniqueName: \"kubernetes.io/projected/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-kube-api-access-hhc8w\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.031268 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.133613 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-config\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.133666 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.133716 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.133758 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.133799 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhc8w\" (UniqueName: \"kubernetes.io/projected/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-kube-api-access-hhc8w\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.133853 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.135590 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-config\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.135696 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.135696 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.136215 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.136574 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.147337 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.158561 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhc8w\" (UniqueName: \"kubernetes.io/projected/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-kube-api-access-hhc8w\") pod \"dnsmasq-dns-74f6bcbc87-dcqcn\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.240351 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-config-data\") pod \"d65b4384-a678-4002-9583-7f89082af14a\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") "
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.245946 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-combined-ca-bundle\") pod \"d65b4384-a678-4002-9583-7f89082af14a\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") "
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.246169 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46kwv\" (UniqueName: \"kubernetes.io/projected/d65b4384-a678-4002-9583-7f89082af14a-kube-api-access-46kwv\") pod \"d65b4384-a678-4002-9583-7f89082af14a\" (UID: \"d65b4384-a678-4002-9583-7f89082af14a\") "
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.250734 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d65b4384-a678-4002-9583-7f89082af14a-kube-api-access-46kwv" (OuterVolumeSpecName: "kube-api-access-46kwv") pod "d65b4384-a678-4002-9583-7f89082af14a" (UID: "d65b4384-a678-4002-9583-7f89082af14a"). InnerVolumeSpecName "kube-api-access-46kwv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.272855 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d65b4384-a678-4002-9583-7f89082af14a" (UID: "d65b4384-a678-4002-9583-7f89082af14a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.292258 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-config-data" (OuterVolumeSpecName: "config-data") pod "d65b4384-a678-4002-9583-7f89082af14a" (UID: "d65b4384-a678-4002-9583-7f89082af14a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.327316 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.350444 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.350492 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65b4384-a678-4002-9583-7f89082af14a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.350505 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46kwv\" (UniqueName: \"kubernetes.io/projected/d65b4384-a678-4002-9583-7f89082af14a-kube-api-access-46kwv\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.474334 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8b1ad4-1803-403b-bc68-8c6ccb877b11" path="/var/lib/kubelet/pods/af8b1ad4-1803-403b-bc68-8c6ccb877b11/volumes"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.628064 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gmlkp" event={"ID":"d65b4384-a678-4002-9583-7f89082af14a","Type":"ContainerDied","Data":"80c98d65087b5806a9de73aa66d3c3e78664c260bb21df0b7b979c3c0df92558"}
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.628451 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80c98d65087b5806a9de73aa66d3c3e78664c260bb21df0b7b979c3c0df92558"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.628126 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gmlkp"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.882413 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dcqcn"]
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.933562 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vj8fn"]
Feb 16 15:13:50 crc kubenswrapper[4705]: E0216 15:13:50.934113 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d65b4384-a678-4002-9583-7f89082af14a" containerName="keystone-db-sync"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.934137 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65b4384-a678-4002-9583-7f89082af14a" containerName="keystone-db-sync"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.934350 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d65b4384-a678-4002-9583-7f89082af14a" containerName="keystone-db-sync"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.937722 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.952871 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.953097 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g4ghk"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.953231 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.953385 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.953498 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.967207 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dcqcn"]
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.972261 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-scripts\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.972347 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-fernet-keys\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.972539 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-config-data\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.972611 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnggj\" (UniqueName: \"kubernetes.io/projected/404be51c-5189-4fe5-a795-3e4cf4146f9d-kube-api-access-lnggj\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.972682 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-credential-keys\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.972800 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-combined-ca-bundle\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:50 crc kubenswrapper[4705]: I0216 15:13:50.992536 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vj8fn"]
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.078849 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-scripts\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.078927 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-fernet-keys\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.079008 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-config-data\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.079065 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnggj\" (UniqueName: \"kubernetes.io/projected/404be51c-5189-4fe5-a795-3e4cf4146f9d-kube-api-access-lnggj\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.079112 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-credential-keys\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.079199 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-combined-ca-bundle\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.087797 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-fernet-keys\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.090393 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-credential-keys\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.090639 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-scripts\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.101541 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-spn7f"]
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.104956 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.105318 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-config-data\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.106853 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-combined-ca-bundle\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.143973 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnggj\" (UniqueName: \"kubernetes.io/projected/404be51c-5189-4fe5-a795-3e4cf4146f9d-kube-api-access-lnggj\") pod \"keystone-bootstrap-vj8fn\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") " pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.185490 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-svc\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.185635 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.185744 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.185775 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.185806 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-config\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.185882 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twbsj\" (UniqueName: \"kubernetes.io/projected/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-kube-api-access-twbsj\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.186298 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-spn7f"]
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.227860 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-nz52p"]
Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.229617 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.233406 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-7v2x2" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.233553 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.281917 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-nz52p"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290038 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twbsj\" (UniqueName: \"kubernetes.io/projected/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-kube-api-access-twbsj\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290104 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-svc\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290187 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290237 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-combined-ca-bundle\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290265 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngt26\" (UniqueName: \"kubernetes.io/projected/72538f80-8a9f-451f-9653-4f1faeec593c-kube-api-access-ngt26\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290306 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290327 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290344 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-config-data\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.290388 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-config\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.291223 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-config\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.292090 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-svc\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.292625 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.293157 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.293703 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-spn7f\" 
(UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.322731 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twbsj\" (UniqueName: \"kubernetes.io/projected/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-kube-api-access-twbsj\") pod \"dnsmasq-dns-847c4cc679-spn7f\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") " pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.374585 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-76rfw"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.376345 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.380222 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.380427 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7rvmg" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.380631 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.394095 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-combined-ca-bundle\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.394162 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngt26\" (UniqueName: \"kubernetes.io/projected/72538f80-8a9f-451f-9653-4f1faeec593c-kube-api-access-ngt26\") pod \"heat-db-sync-nz52p\" (UID: 
\"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.394204 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptwqp\" (UniqueName: \"kubernetes.io/projected/baaef700-c962-494f-bee0-67990bf8bd84-kube-api-access-ptwqp\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.394237 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-config-data\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.394333 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-combined-ca-bundle\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.394394 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-config\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.396102 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-scncd"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.397754 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.403135 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.403528 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.404325 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-config-data\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.407280 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-combined-ca-bundle\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.408497 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6g79l" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.417900 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-scncd"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.435927 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngt26\" (UniqueName: \"kubernetes.io/projected/72538f80-8a9f-451f-9653-4f1faeec593c-kube-api-access-ngt26\") pod \"heat-db-sync-nz52p\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.435966 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-76rfw"] Feb 16 15:13:51 crc 
kubenswrapper[4705]: I0216 15:13:51.441996 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vj8fn" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.466313 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-spn7f" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.469135 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4vj9p"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.471145 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.484984 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4fhnl" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.485214 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.486866 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4vj9p"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.517786 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nfsz\" (UniqueName: \"kubernetes.io/projected/302aee2f-61be-439f-a04e-356243bb65b6-kube-api-access-4nfsz\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.517846 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gc7g\" (UniqueName: \"kubernetes.io/projected/ddb24908-6026-4fe7-81b6-345402c9398e-kube-api-access-5gc7g\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 
16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.517900 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-combined-ca-bundle\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.517999 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-combined-ca-bundle\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518105 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-config\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518123 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-db-sync-config-data\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518174 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-combined-ca-bundle\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518317 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-config-data\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518354 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddb24908-6026-4fe7-81b6-345402c9398e-etc-machine-id\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518398 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-db-sync-config-data\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptwqp\" (UniqueName: \"kubernetes.io/projected/baaef700-c962-494f-bee0-67990bf8bd84-kube-api-access-ptwqp\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.518530 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-scripts\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.531916 4705 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-combined-ca-bundle\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.535638 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-config\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.546883 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-spn7f"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.551732 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptwqp\" (UniqueName: \"kubernetes.io/projected/baaef700-c962-494f-bee0-67990bf8bd84-kube-api-access-ptwqp\") pod \"neutron-db-sync-76rfw\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.556627 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-f8fxj"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.563497 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.585807 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.590174 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-nz52p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.633184 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-db-sync-config-data\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.636487 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-scripts\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.636542 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-combined-ca-bundle\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.646593 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-config-data\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.646682 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddb24908-6026-4fe7-81b6-345402c9398e-etc-machine-id\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.646708 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-db-sync-config-data\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.646846 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-logs\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.646922 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5mzs\" (UniqueName: \"kubernetes.io/projected/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-kube-api-access-m5mzs\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.646960 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-scripts\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.647062 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nfsz\" (UniqueName: \"kubernetes.io/projected/302aee2f-61be-439f-a04e-356243bb65b6-kube-api-access-4nfsz\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.647096 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-5gc7g\" (UniqueName: \"kubernetes.io/projected/ddb24908-6026-4fe7-81b6-345402c9398e-kube-api-access-5gc7g\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.647117 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-config-data\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.647149 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-combined-ca-bundle\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.647186 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-combined-ca-bundle\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.661787 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-db-sync-config-data\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.665571 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-combined-ca-bundle\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.665638 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddb24908-6026-4fe7-81b6-345402c9398e-etc-machine-id\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.682534 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-combined-ca-bundle\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.683420 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-config-data\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.705045 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-db-sync-config-data\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.716147 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xbqk5" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.716485 4705 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"placement-config-data" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.716631 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.725850 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-k24ln"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.736732 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nfsz\" (UniqueName: \"kubernetes.io/projected/302aee2f-61be-439f-a04e-356243bb65b6-kube-api-access-4nfsz\") pod \"barbican-db-sync-4vj9p\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.743679 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-scripts\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.750624 4705 generic.go:334] "Generic (PLEG): container finished" podID="ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" containerID="05201b67128ac4f277cd627c7015b76f0d3e8ee95d995d10260beb03e997bc8d" exitCode=0 Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.757682 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-scripts\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.771127 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-logs\") pod \"placement-db-sync-f8fxj\" (UID: 
\"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.771224 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5mzs\" (UniqueName: \"kubernetes.io/projected/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-kube-api-access-m5mzs\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.771394 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-config-data\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.771439 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-combined-ca-bundle\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.771717 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-logs\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.775764 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-scripts\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.775846 
4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-76rfw" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.786146 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-config-data\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.789854 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn" event={"ID":"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0","Type":"ContainerDied","Data":"05201b67128ac4f277cd627c7015b76f0d3e8ee95d995d10260beb03e997bc8d"} Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.790066 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn" event={"ID":"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0","Type":"ContainerStarted","Data":"030bdfe61a394a989ef6031694c0452fbb492551573dac47e20d613416b7d1f6"} Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.790220 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.792016 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-f8fxj"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.802114 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gc7g\" (UniqueName: \"kubernetes.io/projected/ddb24908-6026-4fe7-81b6-345402c9398e-kube-api-access-5gc7g\") pod \"cinder-db-sync-scncd\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.807592 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-combined-ca-bundle\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.812919 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-scncd" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.818672 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5mzs\" (UniqueName: \"kubernetes.io/projected/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-kube-api-access-m5mzs\") pod \"placement-db-sync-f8fxj\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.836187 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-k24ln"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.847594 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.864221 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.867779 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f8fxj" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.868478 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.869574 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.873297 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.873821 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.875005 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.875068 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.875091 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.875223 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.875576 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-config\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.875659 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmx9q\" (UniqueName: \"kubernetes.io/projected/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-kube-api-access-qmx9q\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.978929 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-config\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979433 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qmx9q\" (UniqueName: \"kubernetes.io/projected/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-kube-api-access-qmx9q\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979490 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979512 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-log-httpd\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979538 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979563 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979578 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979617 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979657 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-run-httpd\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979692 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979722 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-scripts\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979753 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-config-data\") pod \"ceilometer-0\" 
(UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979775 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4tkk\" (UniqueName: \"kubernetes.io/projected/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-kube-api-access-g4tkk\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.979848 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-config\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.980526 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.980646 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc kubenswrapper[4705]: I0216 15:13:51.981086 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:51 crc 
kubenswrapper[4705]: I0216 15:13:51.981855 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.024485 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmx9q\" (UniqueName: \"kubernetes.io/projected/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-kube-api-access-qmx9q\") pod \"dnsmasq-dns-785d8bcb8c-k24ln\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082253 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082309 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-run-httpd\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082346 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-scripts\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082404 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-config-data\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082425 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4tkk\" (UniqueName: \"kubernetes.io/projected/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-kube-api-access-g4tkk\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082548 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.082571 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-log-httpd\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.083027 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-log-httpd\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.083261 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-run-httpd\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.087310 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.087958 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.088652 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-scripts\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.092776 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-config-data\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.110790 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4tkk\" (UniqueName: \"kubernetes.io/projected/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-kube-api-access-g4tkk\") pod \"ceilometer-0\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.146006 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.148352 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.155657 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hkp6m" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.155916 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.157355 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.158878 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.176108 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.194048 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.228103 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.290207 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.292235 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.296318 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.298855 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.298918 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.298971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.299032 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-logs\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.299057 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqd8t\" (UniqueName: \"kubernetes.io/projected/89e0f96f-ae09-4238-9d36-1eafc315ed7e-kube-api-access-mqd8t\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.299098 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-config-data\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.299146 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.299191 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-scripts\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.307544 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.358675 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480191 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480242 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480270 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480307 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480329 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480355 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-logs\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480435 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqd8t\" (UniqueName: \"kubernetes.io/projected/89e0f96f-ae09-4238-9d36-1eafc315ed7e-kube-api-access-mqd8t\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480474 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480491 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-config-data\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480534 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480552 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480569 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480600 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480644 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-scripts\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480733 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.480762 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6n85\" (UniqueName: \"kubernetes.io/projected/e059220a-f230-42fe-b1bf-b19be7abd7e1-kube-api-access-g6n85\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.481303 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.494589 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-logs\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.502410 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-scripts\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.542436 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.560695 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.562502 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.562553 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/69f9a0afde09cde3194ac3fcfa9df7bd80860335646625dfa8f7f213d22f9d05/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.563104 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqd8t\" (UniqueName: \"kubernetes.io/projected/89e0f96f-ae09-4238-9d36-1eafc315ed7e-kube-api-access-mqd8t\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589010 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589064 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589085 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589217 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6n85\" (UniqueName: \"kubernetes.io/projected/e059220a-f230-42fe-b1bf-b19be7abd7e1-kube-api-access-g6n85\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589260 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589318 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589341 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589414 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.589967 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.591278 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-config-data\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.599618 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.605695 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.619272 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.660739 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vj8fn"] Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.666252 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6n85\" (UniqueName: \"kubernetes.io/projected/e059220a-f230-42fe-b1bf-b19be7abd7e1-kube-api-access-g6n85\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.666720 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.666744 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5d44f58a274729942503542a04ea080ac58862a31aa07a9ece94d5eb6543b70/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.670622 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.671595 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.797728 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-spn7f"] Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.805151 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: 
I0216 15:13:52.814322 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vj8fn" event={"ID":"404be51c-5189-4fe5-a795-3e4cf4146f9d","Type":"ContainerStarted","Data":"a22bf069c9870ef1f56ea0f515bff169c7320c6d24545673e51b254c497e6367"} Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.843958 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.881416 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.951390 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn" Feb 16 15:13:52 crc kubenswrapper[4705]: I0216 15:13:52.983860 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.019246 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.020192 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-swift-storage-0\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.020230 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-sb\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.020251 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhc8w\" (UniqueName: \"kubernetes.io/projected/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-kube-api-access-hhc8w\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.021713 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-svc\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.021881 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-config\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.036632 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-kube-api-access-hhc8w" (OuterVolumeSpecName: "kube-api-access-hhc8w") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "kube-api-access-hhc8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.111546 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.120020 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-config" (OuterVolumeSpecName: "config") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.121812 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.125223 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.140099 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.145907 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb\") pod \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\" (UID: \"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0\") " Feb 16 15:13:53 crc kubenswrapper[4705]: W0216 15:13:53.150718 4705 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0/volumes/kubernetes.io~configmap/ovsdbserver-nb Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.150758 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" (UID: "ea51b0ef-b1b2-4da3-8f77-fa90820c78a0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.151065 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.151098 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.151107 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhc8w\" (UniqueName: \"kubernetes.io/projected/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-kube-api-access-hhc8w\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.151143 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.151155 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.151163 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.166134 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-nz52p"] Feb 16 15:13:53 crc kubenswrapper[4705]: W0216 15:13:53.178652 4705 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72538f80_8a9f_451f_9653_4f1faeec593c.slice/crio-fd507f03805d6dc10022368805c6cd8de49b57116a92503dc292d780da3431ec WatchSource:0}: Error finding container fd507f03805d6dc10022368805c6cd8de49b57116a92503dc292d780da3431ec: Status 404 returned error can't find the container with id fd507f03805d6dc10022368805c6cd8de49b57116a92503dc292d780da3431ec Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.192269 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4vj9p"] Feb 16 15:13:53 crc kubenswrapper[4705]: W0216 15:13:53.210968 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod302aee2f_61be_439f_a04e_356243bb65b6.slice/crio-220abc57569b5298e6577a0566c5887691451f9c125f7c627c5141ffee3feccf WatchSource:0}: Error finding container 220abc57569b5298e6577a0566c5887691451f9c125f7c627c5141ffee3feccf: Status 404 returned error can't find the container with id 220abc57569b5298e6577a0566c5887691451f9c125f7c627c5141ffee3feccf Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.356314 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-76rfw"] Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.372246 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-scncd"] Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.412287 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-f8fxj"] Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.584407 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.593901 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-k24ln"] Feb 16 15:13:53 crc kubenswrapper[4705]: W0216 15:13:53.604404 4705 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ee1b858_e5e9_4163_9fe6_e503be62c4f7.slice/crio-0035d7237fe7be1c806d6b9257263418786b4f45d9f2c933f07ec0b5857c3db6 WatchSource:0}: Error finding container 0035d7237fe7be1c806d6b9257263418786b4f45d9f2c933f07ec0b5857c3db6: Status 404 returned error can't find the container with id 0035d7237fe7be1c806d6b9257263418786b4f45d9f2c933f07ec0b5857c3db6 Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.856559 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f8fxj" event={"ID":"e652b8a2-fe79-4cdc-b376-c4bc0b85197f","Type":"ContainerStarted","Data":"1be0d2c6579adbd3cc2685214fa08e5f78ef226638707188ec8a446ccb1b6a4c"} Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.875835 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz52p" event={"ID":"72538f80-8a9f-451f-9653-4f1faeec593c","Type":"ContainerStarted","Data":"fd507f03805d6dc10022368805c6cd8de49b57116a92503dc292d780da3431ec"} Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.886112 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vj8fn" event={"ID":"404be51c-5189-4fe5-a795-3e4cf4146f9d","Type":"ContainerStarted","Data":"f5c17e7d39b9ddbcba6b3a6b64fb5b75e17d9532faec51dee99c1ace5575000a"} Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.892200 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-76rfw" event={"ID":"baaef700-c962-494f-bee0-67990bf8bd84","Type":"ContainerStarted","Data":"5c65ee7316022a6067fee6060582c1e9c9148141d1bad10ffaade19ce9d7d503"} Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.916440 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" event={"ID":"8ee1b858-e5e9-4163-9fe6-e503be62c4f7","Type":"ContainerStarted","Data":"0035d7237fe7be1c806d6b9257263418786b4f45d9f2c933f07ec0b5857c3db6"} Feb 16 
15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.936011 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vj8fn" podStartSLOduration=3.935987281 podStartE2EDuration="3.935987281s" podCreationTimestamp="2026-02-16 15:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:53.91176312 +0000 UTC m=+1228.096740206" watchObservedRunningTime="2026-02-16 15:13:53.935987281 +0000 UTC m=+1228.120964347" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.944969 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerStarted","Data":"1f91f91f4ee1690f46dee7379d3b5f6f9664f4c57d16ad81e7ef1f99a61e9417"} Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.947914 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.949394 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-dcqcn" event={"ID":"ea51b0ef-b1b2-4da3-8f77-fa90820c78a0","Type":"ContainerDied","Data":"030bdfe61a394a989ef6031694c0452fbb492551573dac47e20d613416b7d1f6"} Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.949470 4705 scope.go:117] "RemoveContainer" containerID="05201b67128ac4f277cd627c7015b76f0d3e8ee95d995d10260beb03e997bc8d" Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.971060 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-scncd" event={"ID":"ddb24908-6026-4fe7-81b6-345402c9398e","Type":"ContainerStarted","Data":"25c35aaf8f4af9631df07d9053074c5f0aa7a4b2f00e10128c4a4c8292d954ed"} Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.973979 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-sync-4vj9p" event={"ID":"302aee2f-61be-439f-a04e-356243bb65b6","Type":"ContainerStarted","Data":"220abc57569b5298e6577a0566c5887691451f9c125f7c627c5141ffee3feccf"} Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.974158 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.980829 4705 generic.go:334] "Generic (PLEG): container finished" podID="ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" containerID="4e0dae24c87f70b61e917d05752a749f7497d8718296ac6852500d572db0ac7e" exitCode=0 Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.980889 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-spn7f" event={"ID":"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64","Type":"ContainerDied","Data":"4e0dae24c87f70b61e917d05752a749f7497d8718296ac6852500d572db0ac7e"} Feb 16 15:13:53 crc kubenswrapper[4705]: I0216 15:13:53.980922 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-spn7f" event={"ID":"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64","Type":"ContainerStarted","Data":"d10563f7b1298c4c3f2217ca2ab08353ecf9c50a08e788908c9aa92642c5aac7"} Feb 16 15:13:54 crc kubenswrapper[4705]: W0216 15:13:54.116049 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89e0f96f_ae09_4238_9d36_1eafc315ed7e.slice/crio-8c363419155138f5642ec0c9bcda3f0b7abc04c50f603aabc95f66bf3d3a760e WatchSource:0}: Error finding container 8c363419155138f5642ec0c9bcda3f0b7abc04c50f603aabc95f66bf3d3a760e: Status 404 returned error can't find the container with id 8c363419155138f5642ec0c9bcda3f0b7abc04c50f603aabc95f66bf3d3a760e Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.118505 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dcqcn"] Feb 16 15:13:54 crc kubenswrapper[4705]: W0216 
15:13:54.138539 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode059220a_f230_42fe_b1bf_b19be7abd7e1.slice/crio-c27314f5eaf9bebf22c61c06b82d6d4f877ecaefa27400d926805c23837fc8a3 WatchSource:0}: Error finding container c27314f5eaf9bebf22c61c06b82d6d4f877ecaefa27400d926805c23837fc8a3: Status 404 returned error can't find the container with id c27314f5eaf9bebf22c61c06b82d6d4f877ecaefa27400d926805c23837fc8a3 Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.174795 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-dcqcn"] Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.199812 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.473891 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" path="/var/lib/kubelet/pods/ea51b0ef-b1b2-4da3-8f77-fa90820c78a0/volumes" Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.698363 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.736655 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.843742 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-nb\") pod \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") "
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.851053 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-config\") pod \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") "
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.851438 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-svc\") pod \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") "
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.851576 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-swift-storage-0\") pod \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") "
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.851780 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twbsj\" (UniqueName: \"kubernetes.io/projected/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-kube-api-access-twbsj\") pod \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") "
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.851925 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-sb\") pod \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\" (UID: \"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64\") "
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.894030 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-kube-api-access-twbsj" (OuterVolumeSpecName: "kube-api-access-twbsj") pod "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" (UID: "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64"). InnerVolumeSpecName "kube-api-access-twbsj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.904310 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" (UID: "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.910701 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.916562 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" (UID: "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.961620 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" (UID: "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.961722 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-config" (OuterVolumeSpecName: "config") pod "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" (UID: "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.965909 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.965937 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twbsj\" (UniqueName: \"kubernetes.io/projected/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-kube-api-access-twbsj\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.965949 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.965959 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.965969 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:54 crc kubenswrapper[4705]: I0216 15:13:54.977768 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" (UID: "ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.071665 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.075866 4705 generic.go:334] "Generic (PLEG): container finished" podID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerID="bf52e5c5230ef41f1d394cd0295363d275a7ee8f615d1548a9442be8c7b9d9d3" exitCode=0
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.075955 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" event={"ID":"8ee1b858-e5e9-4163-9fe6-e503be62c4f7","Type":"ContainerDied","Data":"bf52e5c5230ef41f1d394cd0295363d275a7ee8f615d1548a9442be8c7b9d9d3"}
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.083337 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e0f96f-ae09-4238-9d36-1eafc315ed7e","Type":"ContainerStarted","Data":"8c363419155138f5642ec0c9bcda3f0b7abc04c50f603aabc95f66bf3d3a760e"}
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.084970 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.124153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-76rfw" event={"ID":"baaef700-c962-494f-bee0-67990bf8bd84","Type":"ContainerStarted","Data":"2d2e1b5af863f030f5a82ceae3d64982596f76c2c83b8724fb79e532c3c6c337"}
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.127687 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e059220a-f230-42fe-b1bf-b19be7abd7e1","Type":"ContainerStarted","Data":"c27314f5eaf9bebf22c61c06b82d6d4f877ecaefa27400d926805c23837fc8a3"}
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.135301 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-spn7f"
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.136979 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-spn7f" event={"ID":"ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64","Type":"ContainerDied","Data":"d10563f7b1298c4c3f2217ca2ab08353ecf9c50a08e788908c9aa92642c5aac7"}
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.137039 4705 scope.go:117] "RemoveContainer" containerID="4e0dae24c87f70b61e917d05752a749f7497d8718296ac6852500d572db0ac7e"
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.159057 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-76rfw" podStartSLOduration=4.159033495 podStartE2EDuration="4.159033495s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:55.145088743 +0000 UTC m=+1229.330065809" watchObservedRunningTime="2026-02-16 15:13:55.159033495 +0000 UTC m=+1229.344010571"
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.289356 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-spn7f"]
Feb 16 15:13:55 crc kubenswrapper[4705]: I0216 15:13:55.306208 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-spn7f"]
Feb 16 15:13:56 crc kubenswrapper[4705]: I0216 15:13:56.172765 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" event={"ID":"8ee1b858-e5e9-4163-9fe6-e503be62c4f7","Type":"ContainerStarted","Data":"029b7c6144d20bc3ca36a9c94318a43ef08dc00baa20164706f419c9049b6f22"}
Feb 16 15:13:56 crc kubenswrapper[4705]: I0216 15:13:56.174024 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln"
Feb 16 15:13:56 crc kubenswrapper[4705]: I0216 15:13:56.177275 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e0f96f-ae09-4238-9d36-1eafc315ed7e","Type":"ContainerStarted","Data":"6e81c1db2054bffae3d9c862a0e01629535de38e70cd1d7a0f338fba2a4649d2"}
Feb 16 15:13:56 crc kubenswrapper[4705]: I0216 15:13:56.181878 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e059220a-f230-42fe-b1bf-b19be7abd7e1","Type":"ContainerStarted","Data":"d09956a5d7d963d93ff97ec4707c643708ed37013cd33da9a9d40bb92131b3a1"}
Feb 16 15:13:56 crc kubenswrapper[4705]: I0216 15:13:56.202757 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" podStartSLOduration=5.202731758 podStartE2EDuration="5.202731758s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:56.200587198 +0000 UTC m=+1230.385564274" watchObservedRunningTime="2026-02-16 15:13:56.202731758 +0000 UTC m=+1230.387708824"
Feb 16 15:13:56 crc kubenswrapper[4705]: I0216 15:13:56.486920 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" path="/var/lib/kubelet/pods/ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64/volumes"
Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.247552 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e0f96f-ae09-4238-9d36-1eafc315ed7e","Type":"ContainerStarted","Data":"c26503795b42675c203f479d16ce1032d7bdf61dae48cee8b7701d6f388c55a7"}
Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.247806 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-log" containerID="cri-o://6e81c1db2054bffae3d9c862a0e01629535de38e70cd1d7a0f338fba2a4649d2" gracePeriod=30
Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.248403 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-httpd" containerID="cri-o://c26503795b42675c203f479d16ce1032d7bdf61dae48cee8b7701d6f388c55a7" gracePeriod=30
Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.266159 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e059220a-f230-42fe-b1bf-b19be7abd7e1","Type":"ContainerStarted","Data":"1227a95614d6a93a4b573ac4c1af7638dd2c47519c707e903a1915800c021ac0"}
Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.266303 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-log" containerID="cri-o://d09956a5d7d963d93ff97ec4707c643708ed37013cd33da9a9d40bb92131b3a1" gracePeriod=30
Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.266479 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-httpd" containerID="cri-o://1227a95614d6a93a4b573ac4c1af7638dd2c47519c707e903a1915800c021ac0" gracePeriod=30
Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.279282 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.279265714 podStartE2EDuration="6.279265714s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:57.276808264 +0000 UTC m=+1231.461785360" watchObservedRunningTime="2026-02-16 15:13:57.279265714 +0000 UTC m=+1231.464242790"
Feb 16 15:13:57 crc kubenswrapper[4705]: I0216 15:13:57.314625 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.314604227 podStartE2EDuration="6.314604227s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:57.310200114 +0000 UTC m=+1231.495177210" watchObservedRunningTime="2026-02-16 15:13:57.314604227 +0000 UTC m=+1231.499581303"
Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.285131 4705 generic.go:334] "Generic (PLEG): container finished" podID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerID="c26503795b42675c203f479d16ce1032d7bdf61dae48cee8b7701d6f388c55a7" exitCode=0
Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.285550 4705 generic.go:334] "Generic (PLEG): container finished" podID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerID="6e81c1db2054bffae3d9c862a0e01629535de38e70cd1d7a0f338fba2a4649d2" exitCode=143
Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.285615 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e0f96f-ae09-4238-9d36-1eafc315ed7e","Type":"ContainerDied","Data":"c26503795b42675c203f479d16ce1032d7bdf61dae48cee8b7701d6f388c55a7"}
Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.285674 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e0f96f-ae09-4238-9d36-1eafc315ed7e","Type":"ContainerDied","Data":"6e81c1db2054bffae3d9c862a0e01629535de38e70cd1d7a0f338fba2a4649d2"}
Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.291765 4705 generic.go:334] "Generic (PLEG): container finished" podID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerID="1227a95614d6a93a4b573ac4c1af7638dd2c47519c707e903a1915800c021ac0" exitCode=143
Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.291804 4705 generic.go:334] "Generic (PLEG): container finished" podID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerID="d09956a5d7d963d93ff97ec4707c643708ed37013cd33da9a9d40bb92131b3a1" exitCode=143
Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.291832 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e059220a-f230-42fe-b1bf-b19be7abd7e1","Type":"ContainerDied","Data":"1227a95614d6a93a4b573ac4c1af7638dd2c47519c707e903a1915800c021ac0"}
Feb 16 15:13:58 crc kubenswrapper[4705]: I0216 15:13:58.291865 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e059220a-f230-42fe-b1bf-b19be7abd7e1","Type":"ContainerDied","Data":"d09956a5d7d963d93ff97ec4707c643708ed37013cd33da9a9d40bb92131b3a1"}
Feb 16 15:13:59 crc kubenswrapper[4705]: I0216 15:13:59.323838 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vj8fn" event={"ID":"404be51c-5189-4fe5-a795-3e4cf4146f9d","Type":"ContainerDied","Data":"f5c17e7d39b9ddbcba6b3a6b64fb5b75e17d9532faec51dee99c1ace5575000a"}
Feb 16 15:13:59 crc kubenswrapper[4705]: I0216 15:13:59.324488 4705 generic.go:334] "Generic (PLEG): container finished" podID="404be51c-5189-4fe5-a795-3e4cf4146f9d" containerID="f5c17e7d39b9ddbcba6b3a6b64fb5b75e17d9532faec51dee99c1ace5575000a" exitCode=0
Feb 16 15:14:01 crc kubenswrapper[4705]: I0216 15:14:01.565269 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:14:01 crc kubenswrapper[4705]: I0216 15:14:01.572819 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:14:02 crc kubenswrapper[4705]: I0216 15:14:02.196988 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln"
Feb 16 15:14:02 crc kubenswrapper[4705]: I0216 15:14:02.261264 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pmrvk"]
Feb 16 15:14:02 crc kubenswrapper[4705]: I0216 15:14:02.261659 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" containerID="cri-o://5fd932773a38fe8094be9793428326865d5d26e23ac0a0bec85a97b75dc16ba5" gracePeriod=10
Feb 16 15:14:02 crc kubenswrapper[4705]: I0216 15:14:02.371959 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:14:02 crc kubenswrapper[4705]: I0216 15:14:02.977693 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused"
Feb 16 15:14:03 crc kubenswrapper[4705]: I0216 15:14:03.384973 4705 generic.go:334] "Generic (PLEG): container finished" podID="e1826cbb-e404-4385-8af6-36eab56118fb" containerID="5fd932773a38fe8094be9793428326865d5d26e23ac0a0bec85a97b75dc16ba5" exitCode=0
Feb 16 15:14:03 crc kubenswrapper[4705]: I0216 15:14:03.385073 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" event={"ID":"e1826cbb-e404-4385-8af6-36eab56118fb","Type":"ContainerDied","Data":"5fd932773a38fe8094be9793428326865d5d26e23ac0a0bec85a97b75dc16ba5"}
Feb 16 15:14:07 crc kubenswrapper[4705]: I0216 15:14:07.977416 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.007092 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.030222 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.140781 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-combined-ca-bundle\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.140858 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-combined-ca-bundle\") pod \"404be51c-5189-4fe5-a795-3e4cf4146f9d\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.140966 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-credential-keys\") pod \"404be51c-5189-4fe5-a795-3e4cf4146f9d\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141048 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-logs\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141396 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141430 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-config-data\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141462 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnggj\" (UniqueName: \"kubernetes.io/projected/404be51c-5189-4fe5-a795-3e4cf4146f9d-kube-api-access-lnggj\") pod \"404be51c-5189-4fe5-a795-3e4cf4146f9d\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141597 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-fernet-keys\") pod \"404be51c-5189-4fe5-a795-3e4cf4146f9d\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141664 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-scripts\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141708 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-httpd-run\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141812 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-config-data\") pod \"404be51c-5189-4fe5-a795-3e4cf4146f9d\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141866 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqd8t\" (UniqueName: \"kubernetes.io/projected/89e0f96f-ae09-4238-9d36-1eafc315ed7e-kube-api-access-mqd8t\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141920 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-public-tls-certs\") pod \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\" (UID: \"89e0f96f-ae09-4238-9d36-1eafc315ed7e\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.141996 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-scripts\") pod \"404be51c-5189-4fe5-a795-3e4cf4146f9d\" (UID: \"404be51c-5189-4fe5-a795-3e4cf4146f9d\") "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.143950 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-logs" (OuterVolumeSpecName: "logs") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.145590 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.153550 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-scripts" (OuterVolumeSpecName: "scripts") pod "404be51c-5189-4fe5-a795-3e4cf4146f9d" (UID: "404be51c-5189-4fe5-a795-3e4cf4146f9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.161597 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/404be51c-5189-4fe5-a795-3e4cf4146f9d-kube-api-access-lnggj" (OuterVolumeSpecName: "kube-api-access-lnggj") pod "404be51c-5189-4fe5-a795-3e4cf4146f9d" (UID: "404be51c-5189-4fe5-a795-3e4cf4146f9d"). InnerVolumeSpecName "kube-api-access-lnggj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.162510 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e0f96f-ae09-4238-9d36-1eafc315ed7e-kube-api-access-mqd8t" (OuterVolumeSpecName: "kube-api-access-mqd8t") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "kube-api-access-mqd8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.163410 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "404be51c-5189-4fe5-a795-3e4cf4146f9d" (UID: "404be51c-5189-4fe5-a795-3e4cf4146f9d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.164481 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "404be51c-5189-4fe5-a795-3e4cf4146f9d" (UID: "404be51c-5189-4fe5-a795-3e4cf4146f9d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.175995 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e" (OuterVolumeSpecName: "glance") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.206184 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-scripts" (OuterVolumeSpecName: "scripts") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.206242 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.230900 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-config-data" (OuterVolumeSpecName: "config-data") pod "404be51c-5189-4fe5-a795-3e4cf4146f9d" (UID: "404be51c-5189-4fe5-a795-3e4cf4146f9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254233 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254639 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254649 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254667 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqd8t\" (UniqueName: \"kubernetes.io/projected/89e0f96f-ae09-4238-9d36-1eafc315ed7e-kube-api-access-mqd8t\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254678 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254703 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254712 4705 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.254728 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e0f96f-ae09-4238-9d36-1eafc315ed7e-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.255468 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") on node \"crc\" "
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.255486 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnggj\" (UniqueName: \"kubernetes.io/projected/404be51c-5189-4fe5-a795-3e4cf4146f9d-kube-api-access-lnggj\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.255500 4705 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.259272 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-config-data" (OuterVolumeSpecName: "config-data") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.264753 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "89e0f96f-ae09-4238-9d36-1eafc315ed7e" (UID: "89e0f96f-ae09-4238-9d36-1eafc315ed7e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.270655 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "404be51c-5189-4fe5-a795-3e4cf4146f9d" (UID: "404be51c-5189-4fe5-a795-3e4cf4146f9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.294897 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.295267 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e") on node "crc"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.357297 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.357339 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.357353 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e0f96f-ae09-4238-9d36-1eafc315ed7e-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.357716 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404be51c-5189-4fe5-a795-3e4cf4146f9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.485476 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e0f96f-ae09-4238-9d36-1eafc315ed7e","Type":"ContainerDied","Data":"8c363419155138f5642ec0c9bcda3f0b7abc04c50f603aabc95f66bf3d3a760e"}
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.485555 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.485569 4705 scope.go:117] "RemoveContainer" containerID="c26503795b42675c203f479d16ce1032d7bdf61dae48cee8b7701d6f388c55a7"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.490037 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vj8fn"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.490131 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vj8fn" event={"ID":"404be51c-5189-4fe5-a795-3e4cf4146f9d","Type":"ContainerDied","Data":"a22bf069c9870ef1f56ea0f515bff169c7320c6d24545673e51b254c497e6367"}
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.490176 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a22bf069c9870ef1f56ea0f515bff169c7320c6d24545673e51b254c497e6367"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.535736 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.548425 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.559017 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 15:14:10 crc kubenswrapper[4705]: E0216 15:14:10.559676 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-log"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.559698 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-log"
Feb 16 15:14:10 crc kubenswrapper[4705]: E0216 15:14:10.559717 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" containerName="init"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.559727 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" containerName="init"
Feb 16 15:14:10 crc kubenswrapper[4705]: E0216 15:14:10.559777 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-httpd"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.559786 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-httpd"
Feb 16 15:14:10 crc kubenswrapper[4705]: E0216 15:14:10.559802 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="404be51c-5189-4fe5-a795-3e4cf4146f9d" containerName="keystone-bootstrap"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.559810 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="404be51c-5189-4fe5-a795-3e4cf4146f9d" containerName="keystone-bootstrap"
Feb 16 15:14:10 crc kubenswrapper[4705]: E0216 15:14:10.559828 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" containerName="init"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.559834 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" containerName="init"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.560078 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-log"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.560102 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea51b0ef-b1b2-4da3-8f77-fa90820c78a0" containerName="init"
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.560119 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="404be51c-5189-4fe5-a795-3e4cf4146f9d"
containerName="keystone-bootstrap" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.560128 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7e541d-6ea0-4e90-bd5e-aceeccc4fc64" containerName="init" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.560137 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" containerName="glance-httpd" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.561442 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.564567 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.565090 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.568183 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.665070 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-logs\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.665675 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.665802 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j75f5\" (UniqueName: \"kubernetes.io/projected/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-kube-api-access-j75f5\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.665852 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.665883 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.665922 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.666253 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " 
pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.666594 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769689 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769769 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j75f5\" (UniqueName: \"kubernetes.io/projected/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-kube-api-access-j75f5\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769797 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769817 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" 
Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769847 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769912 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.769966 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.770027 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-logs\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.770760 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-logs\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 
15:14:10.770987 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.774953 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.775018 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/69f9a0afde09cde3194ac3fcfa9df7bd80860335646625dfa8f7f213d22f9d05/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.776045 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.776097 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.776791 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.777310 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.788580 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j75f5\" (UniqueName: \"kubernetes.io/projected/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-kube-api-access-j75f5\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.826336 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:10 crc kubenswrapper[4705]: I0216 15:14:10.884359 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.177773 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vj8fn"] Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.189422 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vj8fn"] Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.259937 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-m8mrp"] Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.261820 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.264432 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.264815 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.266172 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.266172 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g4ghk" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.266256 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.323628 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m8mrp"] Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.395661 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-fernet-keys\") pod \"keystone-bootstrap-m8mrp\" 
(UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.395724 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-combined-ca-bundle\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.395938 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-config-data\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.396342 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-credential-keys\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.396496 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmrcs\" (UniqueName: \"kubernetes.io/projected/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-kube-api-access-hmrcs\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.396558 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-scripts\") pod \"keystone-bootstrap-m8mrp\" (UID: 
\"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.498691 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-credential-keys\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.498764 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmrcs\" (UniqueName: \"kubernetes.io/projected/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-kube-api-access-hmrcs\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.498798 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-scripts\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.498890 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-fernet-keys\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.498929 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-combined-ca-bundle\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 
crc kubenswrapper[4705]: I0216 15:14:11.499028 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-config-data\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.506148 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-config-data\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.506221 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-combined-ca-bundle\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.509087 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-scripts\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.514196 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-credential-keys\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.514914 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-fernet-keys\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.524161 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmrcs\" (UniqueName: \"kubernetes.io/projected/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-kube-api-access-hmrcs\") pod \"keystone-bootstrap-m8mrp\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:11 crc kubenswrapper[4705]: I0216 15:14:11.591513 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:12 crc kubenswrapper[4705]: I0216 15:14:12.434334 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="404be51c-5189-4fe5-a795-3e4cf4146f9d" path="/var/lib/kubelet/pods/404be51c-5189-4fe5-a795-3e4cf4146f9d/volumes" Feb 16 15:14:12 crc kubenswrapper[4705]: I0216 15:14:12.436029 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89e0f96f-ae09-4238-9d36-1eafc315ed7e" path="/var/lib/kubelet/pods/89e0f96f-ae09-4238-9d36-1eafc315ed7e/volumes" Feb 16 15:14:14 crc kubenswrapper[4705]: I0216 15:14:14.588845 4705 generic.go:334] "Generic (PLEG): container finished" podID="baaef700-c962-494f-bee0-67990bf8bd84" containerID="2d2e1b5af863f030f5a82ceae3d64982596f76c2c83b8724fb79e532c3c6c337" exitCode=0 Feb 16 15:14:14 crc kubenswrapper[4705]: I0216 15:14:14.589129 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-76rfw" event={"ID":"baaef700-c962-494f-bee0-67990bf8bd84","Type":"ContainerDied","Data":"2d2e1b5af863f030f5a82ceae3d64982596f76c2c83b8724fb79e532c3c6c337"} Feb 16 15:14:17 crc kubenswrapper[4705]: I0216 15:14:17.977989 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" 
podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout" Feb 16 15:14:17 crc kubenswrapper[4705]: I0216 15:14:17.978908 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:14:19 crc kubenswrapper[4705]: E0216 15:14:19.586727 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 16 15:14:19 crc kubenswrapper[4705]: E0216 15:14:19.587294 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf5hf4h57fh576h677h5f7h664h5bfh88h67dh656h675h5f9h5bdh658hb9h69hfdh57bh59dhf7hfch5f5h7hf7h64dh57dh5ffh5ffh7bh57ch597q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Vol
umeMount{Name:kube-api-access-g4tkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b1b8bc91-daf7-4fa0-aad2-7d14527c2298): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.048953 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.049179 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nfsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-4vj9p_openstack(302aee2f-61be-439f-a04e-356243bb65b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.050486 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-4vj9p" 
podUID="302aee2f-61be-439f-a04e-356243bb65b6" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.191352 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.209517 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.274989 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-76rfw" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.298537 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-combined-ca-bundle\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.298620 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-config-data\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.298705 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-scripts\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.299201 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-internal-tls-certs\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: 
\"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.299491 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-logs\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.299650 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.299737 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-httpd-run\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.299805 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6n85\" (UniqueName: \"kubernetes.io/projected/e059220a-f230-42fe-b1bf-b19be7abd7e1-kube-api-access-g6n85\") pod \"e059220a-f230-42fe-b1bf-b19be7abd7e1\" (UID: \"e059220a-f230-42fe-b1bf-b19be7abd7e1\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.301941 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.305560 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-scripts" (OuterVolumeSpecName: "scripts") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.305825 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-logs" (OuterVolumeSpecName: "logs") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.307145 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e059220a-f230-42fe-b1bf-b19be7abd7e1-kube-api-access-g6n85" (OuterVolumeSpecName: "kube-api-access-g6n85") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "kube-api-access-g6n85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.327743 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157" (OuterVolumeSpecName: "glance") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.346488 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.368893 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.377554 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-config-data" (OuterVolumeSpecName: "config-data") pod "e059220a-f230-42fe-b1bf-b19be7abd7e1" (UID: "e059220a-f230-42fe-b1bf-b19be7abd7e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.402864 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-config\") pod \"e1826cbb-e404-4385-8af6-36eab56118fb\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.402943 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-sb\") pod \"e1826cbb-e404-4385-8af6-36eab56118fb\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403001 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-nb\") pod \"e1826cbb-e404-4385-8af6-36eab56118fb\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403030 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-dns-svc\") pod \"e1826cbb-e404-4385-8af6-36eab56118fb\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403140 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-config\") pod \"baaef700-c962-494f-bee0-67990bf8bd84\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403161 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptwqp\" (UniqueName: 
\"kubernetes.io/projected/baaef700-c962-494f-bee0-67990bf8bd84-kube-api-access-ptwqp\") pod \"baaef700-c962-494f-bee0-67990bf8bd84\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403299 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-combined-ca-bundle\") pod \"baaef700-c962-494f-bee0-67990bf8bd84\" (UID: \"baaef700-c962-494f-bee0-67990bf8bd84\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403346 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kc2z\" (UniqueName: \"kubernetes.io/projected/e1826cbb-e404-4385-8af6-36eab56118fb-kube-api-access-6kc2z\") pod \"e1826cbb-e404-4385-8af6-36eab56118fb\" (UID: \"e1826cbb-e404-4385-8af6-36eab56118fb\") " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403927 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403957 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") on node \"crc\" " Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403969 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e059220a-f230-42fe-b1bf-b19be7abd7e1-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403979 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6n85\" (UniqueName: \"kubernetes.io/projected/e059220a-f230-42fe-b1bf-b19be7abd7e1-kube-api-access-g6n85\") on node \"crc\" 
DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403989 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.403998 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.404006 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.404015 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e059220a-f230-42fe-b1bf-b19be7abd7e1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.408962 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baaef700-c962-494f-bee0-67990bf8bd84-kube-api-access-ptwqp" (OuterVolumeSpecName: "kube-api-access-ptwqp") pod "baaef700-c962-494f-bee0-67990bf8bd84" (UID: "baaef700-c962-494f-bee0-67990bf8bd84"). InnerVolumeSpecName "kube-api-access-ptwqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.409632 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1826cbb-e404-4385-8af6-36eab56118fb-kube-api-access-6kc2z" (OuterVolumeSpecName: "kube-api-access-6kc2z") pod "e1826cbb-e404-4385-8af6-36eab56118fb" (UID: "e1826cbb-e404-4385-8af6-36eab56118fb"). InnerVolumeSpecName "kube-api-access-6kc2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.439431 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.439610 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157") on node "crc" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.442768 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-config" (OuterVolumeSpecName: "config") pod "baaef700-c962-494f-bee0-67990bf8bd84" (UID: "baaef700-c962-494f-bee0-67990bf8bd84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.448002 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baaef700-c962-494f-bee0-67990bf8bd84" (UID: "baaef700-c962-494f-bee0-67990bf8bd84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.465538 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1826cbb-e404-4385-8af6-36eab56118fb" (UID: "e1826cbb-e404-4385-8af6-36eab56118fb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.469826 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-config" (OuterVolumeSpecName: "config") pod "e1826cbb-e404-4385-8af6-36eab56118fb" (UID: "e1826cbb-e404-4385-8af6-36eab56118fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.481797 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1826cbb-e404-4385-8af6-36eab56118fb" (UID: "e1826cbb-e404-4385-8af6-36eab56118fb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.485121 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1826cbb-e404-4385-8af6-36eab56118fb" (UID: "e1826cbb-e404-4385-8af6-36eab56118fb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.506953 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.506984 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.507000 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.507015 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1826cbb-e404-4385-8af6-36eab56118fb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.507026 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.507039 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptwqp\" (UniqueName: \"kubernetes.io/projected/baaef700-c962-494f-bee0-67990bf8bd84-kube-api-access-ptwqp\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.507054 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 
15:14:20.507066 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baaef700-c962-494f-bee0-67990bf8bd84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.507080 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kc2z\" (UniqueName: \"kubernetes.io/projected/e1826cbb-e404-4385-8af6-36eab56118fb-kube-api-access-6kc2z\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.677297 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.677318 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e059220a-f230-42fe-b1bf-b19be7abd7e1","Type":"ContainerDied","Data":"c27314f5eaf9bebf22c61c06b82d6d4f877ecaefa27400d926805c23837fc8a3"} Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.681523 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" event={"ID":"e1826cbb-e404-4385-8af6-36eab56118fb","Type":"ContainerDied","Data":"ef7928dfa02730fd5e116d7aa6386088db4c89a5e4b1e91534438dfa1a70e221"} Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.681587 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.683904 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-76rfw" event={"ID":"baaef700-c962-494f-bee0-67990bf8bd84","Type":"ContainerDied","Data":"5c65ee7316022a6067fee6060582c1e9c9148141d1bad10ffaade19ce9d7d503"} Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.683935 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-76rfw" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.683952 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c65ee7316022a6067fee6060582c1e9c9148141d1bad10ffaade19ce9d7d503" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.686599 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-4vj9p" podUID="302aee2f-61be-439f-a04e-356243bb65b6" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.718027 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.752254 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.777114 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.777797 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="init" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.777813 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="init" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.777828 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baaef700-c962-494f-bee0-67990bf8bd84" containerName="neutron-db-sync" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.777836 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="baaef700-c962-494f-bee0-67990bf8bd84" containerName="neutron-db-sync" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 
15:14:20.777854 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-httpd" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.777861 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-httpd" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.777874 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.777881 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" Feb 16 15:14:20 crc kubenswrapper[4705]: E0216 15:14:20.777899 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-log" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.777905 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-log" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.778161 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-httpd" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.778174 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="baaef700-c962-494f-bee0-67990bf8bd84" containerName="neutron-db-sync" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.778191 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" containerName="glance-log" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.778202 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.779690 4705 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.783850 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.784309 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.789715 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pmrvk"] Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.800671 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pmrvk"] Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.814878 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919419 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-logs\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919470 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919511 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919549 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919634 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919678 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6jf6\" (UniqueName: \"kubernetes.io/projected/2678da20-6fd3-430b-8841-40842382c4fb-kube-api-access-v6jf6\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919710 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:20 crc kubenswrapper[4705]: I0216 15:14:20.919757 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.022616 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.022728 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.022805 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.022990 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.023089 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-v6jf6\" (UniqueName: \"kubernetes.io/projected/2678da20-6fd3-430b-8841-40842382c4fb-kube-api-access-v6jf6\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.023164 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.023275 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.023345 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-logs\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.024099 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.025038 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-logs\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.026142 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.026210 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5d44f58a274729942503542a04ea080ac58862a31aa07a9ece94d5eb6543b70/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.028669 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.034900 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.039858 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.051906 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.053441 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6jf6\" (UniqueName: \"kubernetes.io/projected/2678da20-6fd3-430b-8841-40842382c4fb-kube-api-access-v6jf6\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.109558 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.121831 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.464262 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gm2hh"] Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.467579 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.504513 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gm2hh"] Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.643954 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.644022 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.644079 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.644295 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-config\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.644473 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.644569 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlp99\" (UniqueName: \"kubernetes.io/projected/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-kube-api-access-vlp99\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.723572 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77886f8dfb-96bnn"] Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.726125 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.729445 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.729815 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.730073 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7rvmg" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.730249 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.746497 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77886f8dfb-96bnn"] Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.747189 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.747288 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlp99\" (UniqueName: \"kubernetes.io/projected/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-kube-api-access-vlp99\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.749205 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.754530 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.754605 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.754675 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.754748 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-config\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.755668 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-config\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.756574 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.757193 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.766277 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-swift-storage-0\") pod 
\"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.783839 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlp99\" (UniqueName: \"kubernetes.io/projected/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-kube-api-access-vlp99\") pod \"dnsmasq-dns-55f844cf75-gm2hh\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.793727 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.857109 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-ovndb-tls-certs\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.857160 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-httpd-config\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.857290 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-combined-ca-bundle\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.857317 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgx65\" (UniqueName: \"kubernetes.io/projected/b078dc5a-bbed-4006-9d76-370271a27353-kube-api-access-fgx65\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.857352 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-config\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.959369 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-httpd-config\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.959596 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-combined-ca-bundle\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.959632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgx65\" (UniqueName: \"kubernetes.io/projected/b078dc5a-bbed-4006-9d76-370271a27353-kube-api-access-fgx65\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.959673 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-config\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.959734 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-ovndb-tls-certs\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.963769 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-combined-ca-bundle\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.965526 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-httpd-config\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.966225 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-ovndb-tls-certs\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.967352 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-config\") pod \"neutron-77886f8dfb-96bnn\" (UID: 
\"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:21 crc kubenswrapper[4705]: I0216 15:14:21.977944 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgx65\" (UniqueName: \"kubernetes.io/projected/b078dc5a-bbed-4006-9d76-370271a27353-kube-api-access-fgx65\") pod \"neutron-77886f8dfb-96bnn\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.064655 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.201997 4705 scope.go:117] "RemoveContainer" containerID="6e81c1db2054bffae3d9c862a0e01629535de38e70cd1d7a0f338fba2a4649d2" Feb 16 15:14:22 crc kubenswrapper[4705]: E0216 15:14:22.222492 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 16 15:14:22 crc kubenswrapper[4705]: E0216 15:14:22.223127 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gc7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-scncd_openstack(ddb24908-6026-4fe7-81b6-345402c9398e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:14:22 crc kubenswrapper[4705]: E0216 15:14:22.224802 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-scncd" podUID="ddb24908-6026-4fe7-81b6-345402c9398e" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.442944 4705 scope.go:117] "RemoveContainer" containerID="1227a95614d6a93a4b573ac4c1af7638dd2c47519c707e903a1915800c021ac0" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.580477 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e059220a-f230-42fe-b1bf-b19be7abd7e1" path="/var/lib/kubelet/pods/e059220a-f230-42fe-b1bf-b19be7abd7e1/volumes" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.589039 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" path="/var/lib/kubelet/pods/e1826cbb-e404-4385-8af6-36eab56118fb/volumes" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.647640 4705 scope.go:117] "RemoveContainer" containerID="d09956a5d7d963d93ff97ec4707c643708ed37013cd33da9a9d40bb92131b3a1" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.749882 4705 scope.go:117] "RemoveContainer" containerID="5fd932773a38fe8094be9793428326865d5d26e23ac0a0bec85a97b75dc16ba5" Feb 16 15:14:22 crc kubenswrapper[4705]: E0216 15:14:22.769649 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-scncd" podUID="ddb24908-6026-4fe7-81b6-345402c9398e" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.817201 4705 scope.go:117] "RemoveContainer" containerID="24e97e68f945ea90afb1476172863c94c103dc49fd76b27d1442100f2e0fdb3f" Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.883251 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:14:22 crc kubenswrapper[4705]: W0216 15:14:22.895872 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8f8a7c2_28a1_45b0_ac6a_9b6f33ac1a73.slice/crio-c5ba3670e10f24ad54fc42896f6a8dbf5c3da085d24f0211e76f0de678b672ee WatchSource:0}: Error finding container c5ba3670e10f24ad54fc42896f6a8dbf5c3da085d24f0211e76f0de678b672ee: Status 404 returned error can't find the container with id c5ba3670e10f24ad54fc42896f6a8dbf5c3da085d24f0211e76f0de678b672ee Feb 16 15:14:22 crc kubenswrapper[4705]: I0216 15:14:22.979721 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-pmrvk" podUID="e1826cbb-e404-4385-8af6-36eab56118fb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout" Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.334960 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m8mrp"] Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.344800 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gm2hh"] Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.448252 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77886f8dfb-96bnn"] Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.529062 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:23 crc kubenswrapper[4705]: W0216 15:14:23.709482 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod736c4c77_178b_40b8_8f6f_adb8b4b1ea6d.slice/crio-18c9029ea0424e632d8d42d7cf6b7457772241ad33ce395b5abed00841718251 WatchSource:0}: Error finding container 18c9029ea0424e632d8d42d7cf6b7457772241ad33ce395b5abed00841718251: Status 404 returned error can't find the container with id 18c9029ea0424e632d8d42d7cf6b7457772241ad33ce395b5abed00841718251 Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.772892 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2678da20-6fd3-430b-8841-40842382c4fb","Type":"ContainerStarted","Data":"15fe536f2d1e7276c5b6aa9bd3efbc8aff43c887dcf49127f48384d48325f958"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.779206 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" event={"ID":"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d","Type":"ContainerStarted","Data":"18c9029ea0424e632d8d42d7cf6b7457772241ad33ce395b5abed00841718251"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.792922 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerStarted","Data":"8d6f6b83879b1871c1ce4b4df4249213068c9c5c2acaf7af7da436588553b117"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.797670 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz52p" event={"ID":"72538f80-8a9f-451f-9653-4f1faeec593c","Type":"ContainerStarted","Data":"e19781e10423d51e9d0ddb50f45ae545361f191e04463e485e5d4a1ca06560e1"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.804050 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73","Type":"ContainerStarted","Data":"7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.804080 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73","Type":"ContainerStarted","Data":"c5ba3670e10f24ad54fc42896f6a8dbf5c3da085d24f0211e76f0de678b672ee"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.806902 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m8mrp" event={"ID":"eeee3c96-5da7-42eb-9fd9-07a5f09182d5","Type":"ContainerStarted","Data":"1645b9f5eaf15b174986ab0807fdf3998aa93d4543ba16143b96136f511e58ce"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.809231 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f8fxj" event={"ID":"e652b8a2-fe79-4cdc-b376-c4bc0b85197f","Type":"ContainerStarted","Data":"e15307e3817ddf50b95ef7cb58ca5a91c87caee40526fb238aca09e99fde3e55"} Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.884231 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-nz52p" podStartSLOduration=3.996979965 podStartE2EDuration="32.884205898s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="2026-02-16 15:13:53.268576198 +0000 UTC m=+1227.453553274" lastFinishedPulling="2026-02-16 15:14:22.155802131 +0000 UTC m=+1256.340779207" observedRunningTime="2026-02-16 15:14:23.866634373 +0000 UTC m=+1258.051611459" watchObservedRunningTime="2026-02-16 15:14:23.884205898 +0000 UTC m=+1258.069182974" Feb 16 15:14:23 crc kubenswrapper[4705]: I0216 15:14:23.897067 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-f8fxj" podStartSLOduration=6.279759635 podStartE2EDuration="32.897050169s" 
podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="2026-02-16 15:13:53.420343207 +0000 UTC m=+1227.605320283" lastFinishedPulling="2026-02-16 15:14:20.037633741 +0000 UTC m=+1254.222610817" observedRunningTime="2026-02-16 15:14:23.886721219 +0000 UTC m=+1258.071698305" watchObservedRunningTime="2026-02-16 15:14:23.897050169 +0000 UTC m=+1258.082027245" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.011355 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75d799457-fvqj6"] Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.014266 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.028864 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.029135 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.034530 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75d799457-fvqj6"] Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.193282 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-public-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.194230 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqbq\" (UniqueName: \"kubernetes.io/projected/f5639f9d-2d22-47cb-b481-10e88dc7f90f-kube-api-access-hdqbq\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " 
pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.194311 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-internal-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.194425 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-ovndb-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.194496 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-config\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.194567 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-combined-ca-bundle\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.194632 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-httpd-config\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " 
pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.298548 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-public-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.298752 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqbq\" (UniqueName: \"kubernetes.io/projected/f5639f9d-2d22-47cb-b481-10e88dc7f90f-kube-api-access-hdqbq\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.298795 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-internal-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.298856 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-ovndb-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.298905 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-config\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.298960 
4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-combined-ca-bundle\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.299012 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-httpd-config\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.304303 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-public-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.304314 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-ovndb-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.308689 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-httpd-config\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.315134 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-combined-ca-bundle\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.327009 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-internal-tls-certs\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.327448 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-config\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.353524 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqbq\" (UniqueName: \"kubernetes.io/projected/f5639f9d-2d22-47cb-b481-10e88dc7f90f-kube-api-access-hdqbq\") pod \"neutron-75d799457-fvqj6\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.392323 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.840912 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerStarted","Data":"9a7cdbca15bcb88834b38bafb18effcd247f1df4a482e11737dd84f2fd64e363"} Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.843000 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m8mrp" event={"ID":"eeee3c96-5da7-42eb-9fd9-07a5f09182d5","Type":"ContainerStarted","Data":"99a77b47a3f02f20d1a89b92aa183dce6d0d9402668b42b604a80e789789f55a"} Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.848322 4705 generic.go:334] "Generic (PLEG): container finished" podID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerID="cc5c6c10d91867ec0e668fe37ec2a652d379064601d63333e598987b86ebe834" exitCode=0 Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.848555 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" event={"ID":"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d","Type":"ContainerDied","Data":"cc5c6c10d91867ec0e668fe37ec2a652d379064601d63333e598987b86ebe834"} Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.864034 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerStarted","Data":"4bfdeaf9d6d45a7fcd33504e821d9bc71323329cee22917c8ace54705cdb690e"} Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.880819 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerStarted","Data":"9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e"} Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.880864 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.878207 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-m8mrp" podStartSLOduration=13.878183181 podStartE2EDuration="13.878183181s" podCreationTimestamp="2026-02-16 15:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:24.869107316 +0000 UTC m=+1259.054084392" watchObservedRunningTime="2026-02-16 15:14:24.878183181 +0000 UTC m=+1259.063160257" Feb 16 15:14:24 crc kubenswrapper[4705]: I0216 15:14:24.928741 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77886f8dfb-96bnn" podStartSLOduration=3.928719583 podStartE2EDuration="3.928719583s" podCreationTimestamp="2026-02-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:24.892987748 +0000 UTC m=+1259.077964844" watchObservedRunningTime="2026-02-16 15:14:24.928719583 +0000 UTC m=+1259.113696659" Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.273337 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75d799457-fvqj6"] Feb 16 15:14:25 crc kubenswrapper[4705]: W0216 15:14:25.307574 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5639f9d_2d22_47cb_b481_10e88dc7f90f.slice/crio-a17cb528fd15d9b834f48b96c4bb4360b196f8701d2fb7e23b61e3237bd4f97b WatchSource:0}: Error finding container a17cb528fd15d9b834f48b96c4bb4360b196f8701d2fb7e23b61e3237bd4f97b: Status 404 returned error can't find the container with id a17cb528fd15d9b834f48b96c4bb4360b196f8701d2fb7e23b61e3237bd4f97b Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.883794 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/0.log" Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.884902 4705 generic.go:334] "Generic (PLEG): container finished" podID="b078dc5a-bbed-4006-9d76-370271a27353" containerID="4bfdeaf9d6d45a7fcd33504e821d9bc71323329cee22917c8ace54705cdb690e" exitCode=1 Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.885197 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerDied","Data":"4bfdeaf9d6d45a7fcd33504e821d9bc71323329cee22917c8ace54705cdb690e"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.886388 4705 scope.go:117] "RemoveContainer" containerID="4bfdeaf9d6d45a7fcd33504e821d9bc71323329cee22917c8ace54705cdb690e" Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.895391 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2678da20-6fd3-430b-8841-40842382c4fb","Type":"ContainerStarted","Data":"365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.895438 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2678da20-6fd3-430b-8841-40842382c4fb","Type":"ContainerStarted","Data":"dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.905345 4705 generic.go:334] "Generic (PLEG): container finished" podID="e652b8a2-fe79-4cdc-b376-c4bc0b85197f" containerID="e15307e3817ddf50b95ef7cb58ca5a91c87caee40526fb238aca09e99fde3e55" exitCode=0 Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.905426 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f8fxj" 
event={"ID":"e652b8a2-fe79-4cdc-b376-c4bc0b85197f","Type":"ContainerDied","Data":"e15307e3817ddf50b95ef7cb58ca5a91c87caee40526fb238aca09e99fde3e55"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.924173 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" event={"ID":"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d","Type":"ContainerStarted","Data":"f6951bab61da5a049a56c33ba93e49df3fdc49b02f25b9de92342c70737b1218"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.924517 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.938855 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73","Type":"ContainerStarted","Data":"0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.968612 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d799457-fvqj6" event={"ID":"f5639f9d-2d22-47cb-b481-10e88dc7f90f","Type":"ContainerStarted","Data":"b70e5c0615812ff6aed42dcb8e09a0b01754fd31e289a59cfbe7b21ae9cc3afe"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.968698 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d799457-fvqj6" event={"ID":"f5639f9d-2d22-47cb-b481-10e88dc7f90f","Type":"ContainerStarted","Data":"a17cb528fd15d9b834f48b96c4bb4360b196f8701d2fb7e23b61e3237bd4f97b"} Feb 16 15:14:25 crc kubenswrapper[4705]: I0216 15:14:25.983031 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.983000801 podStartE2EDuration="5.983000801s" podCreationTimestamp="2026-02-16 15:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 15:14:25.935414414 +0000 UTC m=+1260.120391500" watchObservedRunningTime="2026-02-16 15:14:25.983000801 +0000 UTC m=+1260.167977877" Feb 16 15:14:26 crc kubenswrapper[4705]: I0216 15:14:26.062272 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.06224046 podStartE2EDuration="16.06224046s" podCreationTimestamp="2026-02-16 15:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:26.000418541 +0000 UTC m=+1260.185395617" watchObservedRunningTime="2026-02-16 15:14:26.06224046 +0000 UTC m=+1260.247217536" Feb 16 15:14:26 crc kubenswrapper[4705]: I0216 15:14:26.074289 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" podStartSLOduration=5.074245057 podStartE2EDuration="5.074245057s" podCreationTimestamp="2026-02-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:26.029749526 +0000 UTC m=+1260.214726602" watchObservedRunningTime="2026-02-16 15:14:26.074245057 +0000 UTC m=+1260.259222133" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.030192 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d799457-fvqj6" event={"ID":"f5639f9d-2d22-47cb-b481-10e88dc7f90f","Type":"ContainerStarted","Data":"338cf708ba8f10f855855c2179e37cb77b418143d440fdc6a5cda229e650ec37"} Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.031149 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.038219 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/1.log" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.039121 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/0.log" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.040089 4705 generic.go:334] "Generic (PLEG): container finished" podID="b078dc5a-bbed-4006-9d76-370271a27353" containerID="e25720320a8914cbc7fcdf6a6be9a35e477cebaafbb15c8d38577dbaddc3f9fa" exitCode=1 Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.040182 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerDied","Data":"e25720320a8914cbc7fcdf6a6be9a35e477cebaafbb15c8d38577dbaddc3f9fa"} Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.040539 4705 scope.go:117] "RemoveContainer" containerID="4bfdeaf9d6d45a7fcd33504e821d9bc71323329cee22917c8ace54705cdb690e" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.041263 4705 scope.go:117] "RemoveContainer" containerID="e25720320a8914cbc7fcdf6a6be9a35e477cebaafbb15c8d38577dbaddc3f9fa" Feb 16 15:14:27 crc kubenswrapper[4705]: E0216 15:14:27.041830 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-77886f8dfb-96bnn_openstack(b078dc5a-bbed-4006-9d76-370271a27353)\"" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.077392 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75d799457-fvqj6" podStartSLOduration=4.077354728 podStartE2EDuration="4.077354728s" podCreationTimestamp="2026-02-16 15:14:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:27.074452366 +0000 UTC m=+1261.259429442" watchObservedRunningTime="2026-02-16 15:14:27.077354728 +0000 UTC m=+1261.262331804" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.558950 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f8fxj" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.750025 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-combined-ca-bundle\") pod \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.750097 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-config-data\") pod \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.750206 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-logs\") pod \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.750238 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-scripts\") pod \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.750501 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-m5mzs\" (UniqueName: \"kubernetes.io/projected/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-kube-api-access-m5mzs\") pod \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\" (UID: \"e652b8a2-fe79-4cdc-b376-c4bc0b85197f\") " Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.750778 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-logs" (OuterVolumeSpecName: "logs") pod "e652b8a2-fe79-4cdc-b376-c4bc0b85197f" (UID: "e652b8a2-fe79-4cdc-b376-c4bc0b85197f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.751130 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.764513 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-scripts" (OuterVolumeSpecName: "scripts") pod "e652b8a2-fe79-4cdc-b376-c4bc0b85197f" (UID: "e652b8a2-fe79-4cdc-b376-c4bc0b85197f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.764900 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-kube-api-access-m5mzs" (OuterVolumeSpecName: "kube-api-access-m5mzs") pod "e652b8a2-fe79-4cdc-b376-c4bc0b85197f" (UID: "e652b8a2-fe79-4cdc-b376-c4bc0b85197f"). InnerVolumeSpecName "kube-api-access-m5mzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.854237 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.854284 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5mzs\" (UniqueName: \"kubernetes.io/projected/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-kube-api-access-m5mzs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.896567 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e652b8a2-fe79-4cdc-b376-c4bc0b85197f" (UID: "e652b8a2-fe79-4cdc-b376-c4bc0b85197f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.926843 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-config-data" (OuterVolumeSpecName: "config-data") pod "e652b8a2-fe79-4cdc-b376-c4bc0b85197f" (UID: "e652b8a2-fe79-4cdc-b376-c4bc0b85197f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.957029 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:27 crc kubenswrapper[4705]: I0216 15:14:27.957083 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652b8a2-fe79-4cdc-b376-c4bc0b85197f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.115230 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f8fxj" event={"ID":"e652b8a2-fe79-4cdc-b376-c4bc0b85197f","Type":"ContainerDied","Data":"1be0d2c6579adbd3cc2685214fa08e5f78ef226638707188ec8a446ccb1b6a4c"} Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.115292 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be0d2c6579adbd3cc2685214fa08e5f78ef226638707188ec8a446ccb1b6a4c" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.115417 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-f8fxj" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.128427 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-565b84d684-sh8jq"] Feb 16 15:14:28 crc kubenswrapper[4705]: E0216 15:14:28.129103 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e652b8a2-fe79-4cdc-b376-c4bc0b85197f" containerName="placement-db-sync" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.129122 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e652b8a2-fe79-4cdc-b376-c4bc0b85197f" containerName="placement-db-sync" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.129357 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e652b8a2-fe79-4cdc-b376-c4bc0b85197f" containerName="placement-db-sync" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.130843 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.136004 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xbqk5" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.136208 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.136328 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.136503 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.138475 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.177762 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/1.log" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.190530 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-565b84d684-sh8jq"] Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.191594 4705 scope.go:117] "RemoveContainer" containerID="e25720320a8914cbc7fcdf6a6be9a35e477cebaafbb15c8d38577dbaddc3f9fa" Feb 16 15:14:28 crc kubenswrapper[4705]: E0216 15:14:28.192052 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-77886f8dfb-96bnn_openstack(b078dc5a-bbed-4006-9d76-370271a27353)\"" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268585 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58726\" (UniqueName: \"kubernetes.io/projected/8486800f-2aec-490d-a174-e05a0fa27a62-kube-api-access-58726\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268680 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-public-tls-certs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268710 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-internal-tls-certs\") pod 
\"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268776 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8486800f-2aec-490d-a174-e05a0fa27a62-logs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268800 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-combined-ca-bundle\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268818 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-config-data\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.268870 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-scripts\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371170 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8486800f-2aec-490d-a174-e05a0fa27a62-logs\") pod \"placement-565b84d684-sh8jq\" (UID: 
\"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-combined-ca-bundle\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371261 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-config-data\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371339 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-scripts\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371451 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58726\" (UniqueName: \"kubernetes.io/projected/8486800f-2aec-490d-a174-e05a0fa27a62-kube-api-access-58726\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371528 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-public-tls-certs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " 
pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.371559 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-internal-tls-certs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.372027 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8486800f-2aec-490d-a174-e05a0fa27a62-logs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.380738 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-combined-ca-bundle\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.381320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-scripts\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.382006 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-public-tls-certs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.383003 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-internal-tls-certs\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.384101 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-config-data\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.394812 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58726\" (UniqueName: \"kubernetes.io/projected/8486800f-2aec-490d-a174-e05a0fa27a62-kube-api-access-58726\") pod \"placement-565b84d684-sh8jq\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.462080 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:28 crc kubenswrapper[4705]: I0216 15:14:28.997041 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-565b84d684-sh8jq"] Feb 16 15:14:29 crc kubenswrapper[4705]: I0216 15:14:29.205738 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-565b84d684-sh8jq" event={"ID":"8486800f-2aec-490d-a174-e05a0fa27a62","Type":"ContainerStarted","Data":"abe1e154dc793291fe4a1e1361bdea85c411201d08d5b6df947af6208be90837"} Feb 16 15:14:29 crc kubenswrapper[4705]: I0216 15:14:29.209473 4705 generic.go:334] "Generic (PLEG): container finished" podID="72538f80-8a9f-451f-9653-4f1faeec593c" containerID="e19781e10423d51e9d0ddb50f45ae545361f191e04463e485e5d4a1ca06560e1" exitCode=0 Feb 16 15:14:29 crc kubenswrapper[4705]: I0216 15:14:29.209507 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz52p" event={"ID":"72538f80-8a9f-451f-9653-4f1faeec593c","Type":"ContainerDied","Data":"e19781e10423d51e9d0ddb50f45ae545361f191e04463e485e5d4a1ca06560e1"} Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.221731 4705 generic.go:334] "Generic (PLEG): container finished" podID="eeee3c96-5da7-42eb-9fd9-07a5f09182d5" containerID="99a77b47a3f02f20d1a89b92aa183dce6d0d9402668b42b604a80e789789f55a" exitCode=0 Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.221828 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m8mrp" event={"ID":"eeee3c96-5da7-42eb-9fd9-07a5f09182d5","Type":"ContainerDied","Data":"99a77b47a3f02f20d1a89b92aa183dce6d0d9402668b42b604a80e789789f55a"} Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.744947 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-nz52p" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.854797 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-combined-ca-bundle\") pod \"72538f80-8a9f-451f-9653-4f1faeec593c\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.854885 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-config-data\") pod \"72538f80-8a9f-451f-9653-4f1faeec593c\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.855113 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngt26\" (UniqueName: \"kubernetes.io/projected/72538f80-8a9f-451f-9653-4f1faeec593c-kube-api-access-ngt26\") pod \"72538f80-8a9f-451f-9653-4f1faeec593c\" (UID: \"72538f80-8a9f-451f-9653-4f1faeec593c\") " Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.869140 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72538f80-8a9f-451f-9653-4f1faeec593c-kube-api-access-ngt26" (OuterVolumeSpecName: "kube-api-access-ngt26") pod "72538f80-8a9f-451f-9653-4f1faeec593c" (UID: "72538f80-8a9f-451f-9653-4f1faeec593c"). InnerVolumeSpecName "kube-api-access-ngt26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.885485 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.887085 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.919632 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72538f80-8a9f-451f-9653-4f1faeec593c" (UID: "72538f80-8a9f-451f-9653-4f1faeec593c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.956993 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.959121 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.959154 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngt26\" (UniqueName: \"kubernetes.io/projected/72538f80-8a9f-451f-9653-4f1faeec593c-kube-api-access-ngt26\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4705]: I0216 15:14:30.964197 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.045591 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-config-data" (OuterVolumeSpecName: "config-data") pod "72538f80-8a9f-451f-9653-4f1faeec593c" (UID: "72538f80-8a9f-451f-9653-4f1faeec593c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.061709 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72538f80-8a9f-451f-9653-4f1faeec593c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.122550 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.122611 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.161614 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.175148 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.242801 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-nz52p" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.244345 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz52p" event={"ID":"72538f80-8a9f-451f-9653-4f1faeec593c","Type":"ContainerDied","Data":"fd507f03805d6dc10022368805c6cd8de49b57116a92503dc292d780da3431ec"} Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.244426 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd507f03805d6dc10022368805c6cd8de49b57116a92503dc292d780da3431ec" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.244448 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.244560 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.245823 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.245893 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.800560 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.887179 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-k24ln"] Feb 16 15:14:31 crc kubenswrapper[4705]: I0216 15:14:31.887641 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="dnsmasq-dns" containerID="cri-o://029b7c6144d20bc3ca36a9c94318a43ef08dc00baa20164706f419c9049b6f22" gracePeriod=10 
Feb 16 15:14:32 crc kubenswrapper[4705]: I0216 15:14:32.194981 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.184:5353: connect: connection refused" Feb 16 15:14:32 crc kubenswrapper[4705]: I0216 15:14:32.264797 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-565b84d684-sh8jq" event={"ID":"8486800f-2aec-490d-a174-e05a0fa27a62","Type":"ContainerStarted","Data":"0612e4fd190e16edf94f100c0cb911943f4b56aaf02aaa8d1073d8e8e6f4c802"} Feb 16 15:14:32 crc kubenswrapper[4705]: I0216 15:14:32.269775 4705 generic.go:334] "Generic (PLEG): container finished" podID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerID="029b7c6144d20bc3ca36a9c94318a43ef08dc00baa20164706f419c9049b6f22" exitCode=0 Feb 16 15:14:32 crc kubenswrapper[4705]: I0216 15:14:32.269913 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" event={"ID":"8ee1b858-e5e9-4163-9fe6-e503be62c4f7","Type":"ContainerDied","Data":"029b7c6144d20bc3ca36a9c94318a43ef08dc00baa20164706f419c9049b6f22"} Feb 16 15:14:33 crc kubenswrapper[4705]: I0216 15:14:33.280760 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:33 crc kubenswrapper[4705]: I0216 15:14:33.280792 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:33 crc kubenswrapper[4705]: I0216 15:14:33.280838 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:33 crc kubenswrapper[4705]: I0216 15:14:33.280873 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:33 crc kubenswrapper[4705]: I0216 15:14:33.951727 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.053633 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-combined-ca-bundle\") pod \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.053679 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-config-data\") pod \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.053706 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmrcs\" (UniqueName: \"kubernetes.io/projected/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-kube-api-access-hmrcs\") pod \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.053946 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-scripts\") pod \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.053982 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-credential-keys\") pod \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.054028 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-fernet-keys\") pod \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\" (UID: \"eeee3c96-5da7-42eb-9fd9-07a5f09182d5\") " Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.059444 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-scripts" (OuterVolumeSpecName: "scripts") pod "eeee3c96-5da7-42eb-9fd9-07a5f09182d5" (UID: "eeee3c96-5da7-42eb-9fd9-07a5f09182d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.059604 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "eeee3c96-5da7-42eb-9fd9-07a5f09182d5" (UID: "eeee3c96-5da7-42eb-9fd9-07a5f09182d5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.063284 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "eeee3c96-5da7-42eb-9fd9-07a5f09182d5" (UID: "eeee3c96-5da7-42eb-9fd9-07a5f09182d5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.094805 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-kube-api-access-hmrcs" (OuterVolumeSpecName: "kube-api-access-hmrcs") pod "eeee3c96-5da7-42eb-9fd9-07a5f09182d5" (UID: "eeee3c96-5da7-42eb-9fd9-07a5f09182d5"). InnerVolumeSpecName "kube-api-access-hmrcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.119592 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-config-data" (OuterVolumeSpecName: "config-data") pod "eeee3c96-5da7-42eb-9fd9-07a5f09182d5" (UID: "eeee3c96-5da7-42eb-9fd9-07a5f09182d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.121558 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eeee3c96-5da7-42eb-9fd9-07a5f09182d5" (UID: "eeee3c96-5da7-42eb-9fd9-07a5f09182d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.160017 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.160055 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.160066 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmrcs\" (UniqueName: \"kubernetes.io/projected/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-kube-api-access-hmrcs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.160075 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-scripts\") on node \"crc\" DevicePath \"\"" Feb 
16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.160090 4705 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.160099 4705 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eeee3c96-5da7-42eb-9fd9-07a5f09182d5-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.300292 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m8mrp" event={"ID":"eeee3c96-5da7-42eb-9fd9-07a5f09182d5","Type":"ContainerDied","Data":"1645b9f5eaf15b174986ab0807fdf3998aa93d4543ba16143b96136f511e58ce"} Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.300338 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1645b9f5eaf15b174986ab0807fdf3998aa93d4543ba16143b96136f511e58ce" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.300491 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m8mrp" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.898240 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.899124 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.909126 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.922234 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.922354 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:34 crc kubenswrapper[4705]: I0216 15:14:34.925707 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.186588 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6cd49d8b6b-6gdmx"] Feb 16 15:14:35 crc kubenswrapper[4705]: E0216 15:14:35.187225 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeee3c96-5da7-42eb-9fd9-07a5f09182d5" containerName="keystone-bootstrap" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.187246 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeee3c96-5da7-42eb-9fd9-07a5f09182d5" containerName="keystone-bootstrap" Feb 16 15:14:35 crc kubenswrapper[4705]: E0216 15:14:35.187260 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72538f80-8a9f-451f-9653-4f1faeec593c" containerName="heat-db-sync" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.187266 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="72538f80-8a9f-451f-9653-4f1faeec593c" containerName="heat-db-sync" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.187550 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeee3c96-5da7-42eb-9fd9-07a5f09182d5" containerName="keystone-bootstrap" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.187575 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="72538f80-8a9f-451f-9653-4f1faeec593c" containerName="heat-db-sync" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.188593 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.195499 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g4ghk" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.195832 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.196092 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.196566 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.196727 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.197763 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.213590 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cd49d8b6b-6gdmx"] Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299583 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-credential-keys\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-combined-ca-bundle\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299676 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-config-data\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299714 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-public-tls-certs\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299788 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-scripts\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299825 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-fernet-keys\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299941 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-internal-tls-certs\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.299972 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6r2f\" (UniqueName: \"kubernetes.io/projected/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-kube-api-access-h6r2f\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.402440 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-scripts\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.402843 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-fernet-keys\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.403008 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-internal-tls-certs\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.403038 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6r2f\" (UniqueName: \"kubernetes.io/projected/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-kube-api-access-h6r2f\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.403122 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-credential-keys\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.403151 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-combined-ca-bundle\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.403168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-config-data\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.403202 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-public-tls-certs\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.409211 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-scripts\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.409916 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-combined-ca-bundle\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.410243 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-public-tls-certs\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.410860 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-internal-tls-certs\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.420961 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-config-data\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: 
\"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.421035 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-fernet-keys\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.436337 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6r2f\" (UniqueName: \"kubernetes.io/projected/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-kube-api-access-h6r2f\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.441557 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57b8117e-e668-46a4-a652-8ac2b3e5d8ff-credential-keys\") pod \"keystone-6cd49d8b6b-6gdmx\" (UID: \"57b8117e-e668-46a4-a652-8ac2b3e5d8ff\") " pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:35 crc kubenswrapper[4705]: I0216 15:14:35.534345 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.196996 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.184:5353: connect: connection refused" Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.760303 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.793245 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-nb\") pod \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.793857 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-swift-storage-0\") pod \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.793901 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-svc\") pod \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.793930 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-config\") pod \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.793954 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-sb\") pod \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.793986 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmx9q\" 
(UniqueName: \"kubernetes.io/projected/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-kube-api-access-qmx9q\") pod \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\" (UID: \"8ee1b858-e5e9-4163-9fe6-e503be62c4f7\") " Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.807042 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-kube-api-access-qmx9q" (OuterVolumeSpecName: "kube-api-access-qmx9q") pod "8ee1b858-e5e9-4163-9fe6-e503be62c4f7" (UID: "8ee1b858-e5e9-4163-9fe6-e503be62c4f7"). InnerVolumeSpecName "kube-api-access-qmx9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:37 crc kubenswrapper[4705]: I0216 15:14:37.897696 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmx9q\" (UniqueName: \"kubernetes.io/projected/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-kube-api-access-qmx9q\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.037542 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ee1b858-e5e9-4163-9fe6-e503be62c4f7" (UID: "8ee1b858-e5e9-4163-9fe6-e503be62c4f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.048080 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8ee1b858-e5e9-4163-9fe6-e503be62c4f7" (UID: "8ee1b858-e5e9-4163-9fe6-e503be62c4f7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.059010 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ee1b858-e5e9-4163-9fe6-e503be62c4f7" (UID: "8ee1b858-e5e9-4163-9fe6-e503be62c4f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.066768 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ee1b858-e5e9-4163-9fe6-e503be62c4f7" (UID: "8ee1b858-e5e9-4163-9fe6-e503be62c4f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.075617 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-config" (OuterVolumeSpecName: "config") pod "8ee1b858-e5e9-4163-9fe6-e503be62c4f7" (UID: "8ee1b858-e5e9-4163-9fe6-e503be62c4f7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.102582 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.102622 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.102633 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.102646 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.102657 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1b858-e5e9-4163-9fe6-e503be62c4f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.109475 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cd49d8b6b-6gdmx"] Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.361628 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" event={"ID":"8ee1b858-e5e9-4163-9fe6-e503be62c4f7","Type":"ContainerDied","Data":"0035d7237fe7be1c806d6b9257263418786b4f45d9f2c933f07ec0b5857c3db6"} Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.362123 4705 scope.go:117] "RemoveContainer" 
containerID="029b7c6144d20bc3ca36a9c94318a43ef08dc00baa20164706f419c9049b6f22" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.361718 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-k24ln" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.365797 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-565b84d684-sh8jq" event={"ID":"8486800f-2aec-490d-a174-e05a0fa27a62","Type":"ContainerStarted","Data":"72eb1ef184be31aa6e604bc1b1e7ef2a67bc265c5ddd264b807efbf4b1b61b79"} Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.366481 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.366540 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.369955 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4vj9p" event={"ID":"302aee2f-61be-439f-a04e-356243bb65b6","Type":"ContainerStarted","Data":"a7a5ccb1213e05403b2c609c1d0142378875d98d299f4c29f81e4b95d8d137f8"} Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.376406 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerStarted","Data":"7cdc82c1f54346fbd4bdea38f1d1311837c08094d5d76a0e3ecc3bb36394f874"} Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.379121 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cd49d8b6b-6gdmx" event={"ID":"57b8117e-e668-46a4-a652-8ac2b3e5d8ff","Type":"ContainerStarted","Data":"fab6c925c8353afae688e67a8410205c3190e7069e0e56260188ee57940675ff"} Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.398051 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-db-sync-4vj9p" podStartSLOduration=3.2440646810000002 podStartE2EDuration="47.398016456s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="2026-02-16 15:13:53.255803829 +0000 UTC m=+1227.440780905" lastFinishedPulling="2026-02-16 15:14:37.409755604 +0000 UTC m=+1271.594732680" observedRunningTime="2026-02-16 15:14:38.38605622 +0000 UTC m=+1272.571033306" watchObservedRunningTime="2026-02-16 15:14:38.398016456 +0000 UTC m=+1272.582993532" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.400317 4705 scope.go:117] "RemoveContainer" containerID="bf52e5c5230ef41f1d394cd0295363d275a7ee8f615d1548a9442be8c7b9d9d3" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.437742 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-565b84d684-sh8jq" podStartSLOduration=10.437720153 podStartE2EDuration="10.437720153s" podCreationTimestamp="2026-02-16 15:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:38.424475901 +0000 UTC m=+1272.609452977" watchObservedRunningTime="2026-02-16 15:14:38.437720153 +0000 UTC m=+1272.622697229" Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.489010 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-k24ln"] Feb 16 15:14:38 crc kubenswrapper[4705]: I0216 15:14:38.520475 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-k24ln"] Feb 16 15:14:39 crc kubenswrapper[4705]: I0216 15:14:39.401048 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cd49d8b6b-6gdmx" event={"ID":"57b8117e-e668-46a4-a652-8ac2b3e5d8ff","Type":"ContainerStarted","Data":"12f4d905e22f39c8feaca1d87dae8f0d013b97499842894440bd1a9f3a475c76"} Feb 16 15:14:39 crc kubenswrapper[4705]: I0216 15:14:39.405135 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-scncd" event={"ID":"ddb24908-6026-4fe7-81b6-345402c9398e","Type":"ContainerStarted","Data":"c4e7cf35ca9cdb1d088afb52cbad0fa1eb61329b9888ee9b04889ba66e69edd4"} Feb 16 15:14:39 crc kubenswrapper[4705]: I0216 15:14:39.438797 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6cd49d8b6b-6gdmx" podStartSLOduration=4.438772126 podStartE2EDuration="4.438772126s" podCreationTimestamp="2026-02-16 15:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:39.421087888 +0000 UTC m=+1273.606064984" watchObservedRunningTime="2026-02-16 15:14:39.438772126 +0000 UTC m=+1273.623749212" Feb 16 15:14:39 crc kubenswrapper[4705]: I0216 15:14:39.451799 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-scncd" podStartSLOduration=4.489733635 podStartE2EDuration="48.451777271s" podCreationTimestamp="2026-02-16 15:13:51 +0000 UTC" firstStartedPulling="2026-02-16 15:13:53.448793458 +0000 UTC m=+1227.633770534" lastFinishedPulling="2026-02-16 15:14:37.410837094 +0000 UTC m=+1271.595814170" observedRunningTime="2026-02-16 15:14:39.450725252 +0000 UTC m=+1273.635702348" watchObservedRunningTime="2026-02-16 15:14:39.451777271 +0000 UTC m=+1273.636754377" Feb 16 15:14:40 crc kubenswrapper[4705]: I0216 15:14:40.229564 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:14:40 crc kubenswrapper[4705]: I0216 15:14:40.421562 4705 scope.go:117] "RemoveContainer" containerID="e25720320a8914cbc7fcdf6a6be9a35e477cebaafbb15c8d38577dbaddc3f9fa" Feb 16 15:14:40 crc kubenswrapper[4705]: I0216 15:14:40.453902 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" path="/var/lib/kubelet/pods/8ee1b858-e5e9-4163-9fe6-e503be62c4f7/volumes" Feb 16 15:14:40 
crc kubenswrapper[4705]: I0216 15:14:40.454742 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.435849 4705 generic.go:334] "Generic (PLEG): container finished" podID="302aee2f-61be-439f-a04e-356243bb65b6" containerID="a7a5ccb1213e05403b2c609c1d0142378875d98d299f4c29f81e4b95d8d137f8" exitCode=0 Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.436422 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4vj9p" event={"ID":"302aee2f-61be-439f-a04e-356243bb65b6","Type":"ContainerDied","Data":"a7a5ccb1213e05403b2c609c1d0142378875d98d299f4c29f81e4b95d8d137f8"} Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.444313 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/2.log" Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.444924 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/1.log" Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.445471 4705 generic.go:334] "Generic (PLEG): container finished" podID="b078dc5a-bbed-4006-9d76-370271a27353" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9" exitCode=1 Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.445529 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerDied","Data":"d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9"} Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.445592 4705 scope.go:117] "RemoveContainer" containerID="e25720320a8914cbc7fcdf6a6be9a35e477cebaafbb15c8d38577dbaddc3f9fa" Feb 16 15:14:41 crc kubenswrapper[4705]: I0216 15:14:41.447162 4705 scope.go:117] 
"RemoveContainer" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9" Feb 16 15:14:41 crc kubenswrapper[4705]: E0216 15:14:41.447502 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-77886f8dfb-96bnn_openstack(b078dc5a-bbed-4006-9d76-370271a27353)\"" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353" Feb 16 15:14:42 crc kubenswrapper[4705]: I0216 15:14:42.469066 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/2.log" Feb 16 15:14:43 crc kubenswrapper[4705]: I0216 15:14:43.486094 4705 generic.go:334] "Generic (PLEG): container finished" podID="ddb24908-6026-4fe7-81b6-345402c9398e" containerID="c4e7cf35ca9cdb1d088afb52cbad0fa1eb61329b9888ee9b04889ba66e69edd4" exitCode=0 Feb 16 15:14:43 crc kubenswrapper[4705]: I0216 15:14:43.488274 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-scncd" event={"ID":"ddb24908-6026-4fe7-81b6-345402c9398e","Type":"ContainerDied","Data":"c4e7cf35ca9cdb1d088afb52cbad0fa1eb61329b9888ee9b04889ba66e69edd4"} Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.185211 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.218525 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nfsz\" (UniqueName: \"kubernetes.io/projected/302aee2f-61be-439f-a04e-356243bb65b6-kube-api-access-4nfsz\") pod \"302aee2f-61be-439f-a04e-356243bb65b6\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.219204 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-combined-ca-bundle\") pod \"302aee2f-61be-439f-a04e-356243bb65b6\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.219547 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-db-sync-config-data\") pod \"302aee2f-61be-439f-a04e-356243bb65b6\" (UID: \"302aee2f-61be-439f-a04e-356243bb65b6\") " Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.247609 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302aee2f-61be-439f-a04e-356243bb65b6-kube-api-access-4nfsz" (OuterVolumeSpecName: "kube-api-access-4nfsz") pod "302aee2f-61be-439f-a04e-356243bb65b6" (UID: "302aee2f-61be-439f-a04e-356243bb65b6"). InnerVolumeSpecName "kube-api-access-4nfsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.260284 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "302aee2f-61be-439f-a04e-356243bb65b6" (UID: "302aee2f-61be-439f-a04e-356243bb65b6"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.301061 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "302aee2f-61be-439f-a04e-356243bb65b6" (UID: "302aee2f-61be-439f-a04e-356243bb65b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.349559 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nfsz\" (UniqueName: \"kubernetes.io/projected/302aee2f-61be-439f-a04e-356243bb65b6-kube-api-access-4nfsz\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.349627 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.349644 4705 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/302aee2f-61be-439f-a04e-356243bb65b6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.529192 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4vj9p" event={"ID":"302aee2f-61be-439f-a04e-356243bb65b6","Type":"ContainerDied","Data":"220abc57569b5298e6577a0566c5887691451f9c125f7c627c5141ffee3feccf"} Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.529255 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220abc57569b5298e6577a0566c5887691451f9c125f7c627c5141ffee3feccf" Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.529293 4705 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4vj9p" Feb 16 15:14:46 crc kubenswrapper[4705]: I0216 15:14:46.970479 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-scncd" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169039 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddb24908-6026-4fe7-81b6-345402c9398e-etc-machine-id\") pod \"ddb24908-6026-4fe7-81b6-345402c9398e\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169143 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gc7g\" (UniqueName: \"kubernetes.io/projected/ddb24908-6026-4fe7-81b6-345402c9398e-kube-api-access-5gc7g\") pod \"ddb24908-6026-4fe7-81b6-345402c9398e\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169198 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddb24908-6026-4fe7-81b6-345402c9398e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ddb24908-6026-4fe7-81b6-345402c9398e" (UID: "ddb24908-6026-4fe7-81b6-345402c9398e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169309 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-combined-ca-bundle\") pod \"ddb24908-6026-4fe7-81b6-345402c9398e\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169437 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-db-sync-config-data\") pod \"ddb24908-6026-4fe7-81b6-345402c9398e\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169469 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-config-data\") pod \"ddb24908-6026-4fe7-81b6-345402c9398e\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.169489 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-scripts\") pod \"ddb24908-6026-4fe7-81b6-345402c9398e\" (UID: \"ddb24908-6026-4fe7-81b6-345402c9398e\") " Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.171346 4705 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ddb24908-6026-4fe7-81b6-345402c9398e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.174623 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb24908-6026-4fe7-81b6-345402c9398e-kube-api-access-5gc7g" (OuterVolumeSpecName: "kube-api-access-5gc7g") 
pod "ddb24908-6026-4fe7-81b6-345402c9398e" (UID: "ddb24908-6026-4fe7-81b6-345402c9398e"). InnerVolumeSpecName "kube-api-access-5gc7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.174960 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-scripts" (OuterVolumeSpecName: "scripts") pod "ddb24908-6026-4fe7-81b6-345402c9398e" (UID: "ddb24908-6026-4fe7-81b6-345402c9398e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.175834 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ddb24908-6026-4fe7-81b6-345402c9398e" (UID: "ddb24908-6026-4fe7-81b6-345402c9398e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.243224 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-config-data" (OuterVolumeSpecName: "config-data") pod "ddb24908-6026-4fe7-81b6-345402c9398e" (UID: "ddb24908-6026-4fe7-81b6-345402c9398e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.250617 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddb24908-6026-4fe7-81b6-345402c9398e" (UID: "ddb24908-6026-4fe7-81b6-345402c9398e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.273137 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gc7g\" (UniqueName: \"kubernetes.io/projected/ddb24908-6026-4fe7-81b6-345402c9398e-kube-api-access-5gc7g\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.273183 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.273198 4705 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.273213 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.273227 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb24908-6026-4fe7-81b6-345402c9398e-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:47 crc kubenswrapper[4705]: E0216 15:14:47.451247 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.550450 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5bf77f7566-frgcc"] Feb 16 15:14:47 crc kubenswrapper[4705]: E0216 15:14:47.551111 4705 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302aee2f-61be-439f-a04e-356243bb65b6" containerName="barbican-db-sync" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551130 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="302aee2f-61be-439f-a04e-356243bb65b6" containerName="barbican-db-sync" Feb 16 15:14:47 crc kubenswrapper[4705]: E0216 15:14:47.551166 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="dnsmasq-dns" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551174 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="dnsmasq-dns" Feb 16 15:14:47 crc kubenswrapper[4705]: E0216 15:14:47.551187 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb24908-6026-4fe7-81b6-345402c9398e" containerName="cinder-db-sync" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551195 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb24908-6026-4fe7-81b6-345402c9398e" containerName="cinder-db-sync" Feb 16 15:14:47 crc kubenswrapper[4705]: E0216 15:14:47.551210 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="init" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551217 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="init" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551459 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="302aee2f-61be-439f-a04e-356243bb65b6" containerName="barbican-db-sync" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551483 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb24908-6026-4fe7-81b6-345402c9398e" containerName="cinder-db-sync" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.551504 4705 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="8ee1b858-e5e9-4163-9fe6-e503be62c4f7" containerName="dnsmasq-dns" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.552837 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.571708 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-scncd" event={"ID":"ddb24908-6026-4fe7-81b6-345402c9398e","Type":"ContainerDied","Data":"25c35aaf8f4af9631df07d9053074c5f0aa7a4b2f00e10128c4a4c8292d954ed"} Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.571756 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c35aaf8f4af9631df07d9053074c5f0aa7a4b2f00e10128c4a4c8292d954ed" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.571832 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-scncd" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.575100 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.575227 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.575431 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4fhnl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.575849 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68c59b585f-gvjjl"] Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.582225 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.586318 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerStarted","Data":"ec3ce9e162fe84497d1167a941a28f56f05bc9a6de835bb6906950d33e1b24de"} Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.586544 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="ceilometer-notification-agent" containerID="cri-o://9a7cdbca15bcb88834b38bafb18effcd247f1df4a482e11737dd84f2fd64e363" gracePeriod=30 Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.586718 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.586772 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="proxy-httpd" containerID="cri-o://ec3ce9e162fe84497d1167a941a28f56f05bc9a6de835bb6906950d33e1b24de" gracePeriod=30 Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.586819 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="sg-core" containerID="cri-o://7cdc82c1f54346fbd4bdea38f1d1311837c08094d5d76a0e3ecc3bb36394f874" gracePeriod=30 Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.592870 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.593398 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22qmz\" (UniqueName: 
\"kubernetes.io/projected/edea8308-f2c7-4f10-993c-974327a36727-kube-api-access-22qmz\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.593522 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-config-data\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.593830 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-config-data-custom\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.594201 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edea8308-f2c7-4f10-993c-974327a36727-logs\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.594563 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-combined-ca-bundle\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 
crc kubenswrapper[4705]: I0216 15:14:47.607295 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bf77f7566-frgcc"] Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.649716 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68c59b585f-gvjjl"] Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.709590 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-config-data-custom\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.709679 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-config-data-custom\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.709732 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-logs\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.709817 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edea8308-f2c7-4f10-993c-974327a36727-logs\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc 
kubenswrapper[4705]: I0216 15:14:47.709844 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-combined-ca-bundle\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.709867 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7646m\" (UniqueName: \"kubernetes.io/projected/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-kube-api-access-7646m\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.709989 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-combined-ca-bundle\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.710068 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22qmz\" (UniqueName: \"kubernetes.io/projected/edea8308-f2c7-4f10-993c-974327a36727-kube-api-access-22qmz\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.710101 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-config-data\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: 
\"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.710187 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-config-data\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.713715 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edea8308-f2c7-4f10-993c-974327a36727-logs\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.718106 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-config-data-custom\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.733781 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-config-data\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.735040 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edea8308-f2c7-4f10-993c-974327a36727-combined-ca-bundle\") pod 
\"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.744757 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22qmz\" (UniqueName: \"kubernetes.io/projected/edea8308-f2c7-4f10-993c-974327a36727-kube-api-access-22qmz\") pod \"barbican-keystone-listener-5bf77f7566-frgcc\" (UID: \"edea8308-f2c7-4f10-993c-974327a36727\") " pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.770544 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qlq6b"] Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.774837 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.796077 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qlq6b"] Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.812195 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-config-data-custom\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.812269 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-logs\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.812331 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-combined-ca-bundle\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.812351 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7646m\" (UniqueName: \"kubernetes.io/projected/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-kube-api-access-7646m\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.813072 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-logs\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.817179 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-config-data\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.822581 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-combined-ca-bundle\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.833486 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-config-data-custom\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.838077 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-config-data\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.854041 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7646m\" (UniqueName: \"kubernetes.io/projected/eff171da-ce4a-4c88-b7bd-b7b88e6ad322-kube-api-access-7646m\") pod \"barbican-worker-68c59b585f-gvjjl\" (UID: \"eff171da-ce4a-4c88-b7bd-b7b88e6ad322\") " pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.911592 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.923681 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-svc\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.923790 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.923861 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.923903 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-config\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.923946 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r7hs\" (UniqueName: \"kubernetes.io/projected/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-kube-api-access-6r7hs\") pod 
\"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.923990 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.964750 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68c59b585f-gvjjl" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.971448 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-56f6fcbd5d-ql4gk"] Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.973898 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:47 crc kubenswrapper[4705]: I0216 15:14:47.991865 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.005693 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56f6fcbd5d-ql4gk"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.028146 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-svc\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.028949 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.029042 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.029081 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-config\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.029126 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r7hs\" (UniqueName: \"kubernetes.io/projected/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-kube-api-access-6r7hs\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.029164 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.030575 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.031099 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-svc\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.032859 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-config\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.033260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.033689 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.072101 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r7hs\" (UniqueName: \"kubernetes.io/projected/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-kube-api-access-6r7hs\") pod \"dnsmasq-dns-85ff748b95-qlq6b\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.120203 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.134072 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.135604 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data-custom\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.135659 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-combined-ca-bundle\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.135930 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-logs\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.136947 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbdxn\" (UniqueName: \"kubernetes.io/projected/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-kube-api-access-gbdxn\") pod 
\"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.292667 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbdxn\" (UniqueName: \"kubernetes.io/projected/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-kube-api-access-gbdxn\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.292860 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.292920 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data-custom\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.292950 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-combined-ca-bundle\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.293132 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-logs\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: 
\"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.294217 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-logs\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.367959 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data-custom\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.384925 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-combined-ca-bundle\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.406925 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.424404 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbdxn\" (UniqueName: \"kubernetes.io/projected/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-kube-api-access-gbdxn\") pod \"barbican-api-56f6fcbd5d-ql4gk\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " pod="openstack/barbican-api-56f6fcbd5d-ql4gk" 
Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.479720 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.612810 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.615940 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.626216 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.626724 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6g79l" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.626874 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.626998 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.639889 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.666308 4705 generic.go:334] "Generic (PLEG): container finished" podID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerID="7cdc82c1f54346fbd4bdea38f1d1311837c08094d5d76a0e3ecc3bb36394f874" exitCode=2 Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.666406 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerDied","Data":"7cdc82c1f54346fbd4bdea38f1d1311837c08094d5d76a0e3ecc3bb36394f874"} Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.749604 4705 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qlq6b"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.796428 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpwc7\" (UniqueName: \"kubernetes.io/projected/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-kube-api-access-jpwc7\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.796501 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.796541 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.796709 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-scripts\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.796761 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" 
Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.796784 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.822475 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bkmwk"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.825258 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.875933 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bkmwk"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.900106 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-scripts\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.900197 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.900249 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 
15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.900318 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpwc7\" (UniqueName: \"kubernetes.io/projected/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-kube-api-access-jpwc7\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.900345 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.900389 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.901147 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.905692 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.909842 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.912995 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-scripts\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.916164 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.916220 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.920701 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.921129 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.953265 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:48 crc kubenswrapper[4705]: I0216 15:14:48.959976 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpwc7\" (UniqueName: 
\"kubernetes.io/projected/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-kube-api-access-jpwc7\") pod \"cinder-scheduler-0\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007074 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2dbw\" (UniqueName: \"kubernetes.io/projected/69bc6a88-b325-43bd-af4c-55283723a765-kube-api-access-s2dbw\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007157 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/541411df-f636-4dab-a4e2-2ecc8933f236-kube-api-access-fkbhv\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007193 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007363 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007496 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-scripts\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007716 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007754 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69bc6a88-b325-43bd-af4c-55283723a765-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.007985 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.008032 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.008068 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.008117 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-config\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.008236 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data-custom\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.008456 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc6a88-b325-43bd-af4c-55283723a765-logs\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.046304 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.048874 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bf77f7566-frgcc"] Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113036 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113080 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69bc6a88-b325-43bd-af4c-55283723a765-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113148 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113169 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113187 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113213 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-config\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data-custom\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113289 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc6a88-b325-43bd-af4c-55283723a765-logs\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113327 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dbw\" (UniqueName: \"kubernetes.io/projected/69bc6a88-b325-43bd-af4c-55283723a765-kube-api-access-s2dbw\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113345 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/541411df-f636-4dab-a4e2-2ecc8933f236-kube-api-access-fkbhv\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: 
\"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113363 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113414 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.113444 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-scripts\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.117494 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-config\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.118115 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.118166 4705 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69bc6a88-b325-43bd-af4c-55283723a765-etc-machine-id\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.118814 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.119358 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.119933 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.125917 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc6a88-b325-43bd-af4c-55283723a765-logs\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.130919 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data\") pod \"cinder-api-0\" (UID: 
\"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.132143 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data-custom\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.134439 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.134897 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-scripts\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.153890 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dbw\" (UniqueName: \"kubernetes.io/projected/69bc6a88-b325-43bd-af4c-55283723a765-kube-api-access-s2dbw\") pod \"cinder-api-0\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") " pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.166595 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/541411df-f636-4dab-a4e2-2ecc8933f236-kube-api-access-fkbhv\") pod \"dnsmasq-dns-5c9776ccc5-bkmwk\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.175485 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.275128 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.305333 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68c59b585f-gvjjl"] Feb 16 15:14:49 crc kubenswrapper[4705]: W0216 15:14:49.349126 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeff171da_ce4a_4c88_b7bd_b7b88e6ad322.slice/crio-1c09eadb99d613bc85130c76cf3fab952fc4000f8f699faa71d9527e30c09254 WatchSource:0}: Error finding container 1c09eadb99d613bc85130c76cf3fab952fc4000f8f699faa71d9527e30c09254: Status 404 returned error can't find the container with id 1c09eadb99d613bc85130c76cf3fab952fc4000f8f699faa71d9527e30c09254 Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.396818 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qlq6b"] Feb 16 15:14:49 crc kubenswrapper[4705]: W0216 15:14:49.418642 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56f5bc83_36d4_41e0_8b6f_2d0854d7a171.slice/crio-c234aab2b5987a184db4b9c3e78803d1b113bb91a28bac66a4865b9eee8979ee WatchSource:0}: Error finding container c234aab2b5987a184db4b9c3e78803d1b113bb91a28bac66a4865b9eee8979ee: Status 404 returned error can't find the container with id c234aab2b5987a184db4b9c3e78803d1b113bb91a28bac66a4865b9eee8979ee Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.606361 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56f6fcbd5d-ql4gk"] Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.705726 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" 
event={"ID":"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6","Type":"ContainerStarted","Data":"494073a82ffb15d51ca9ccf70ddd818083ecfa9ff2e728289031a38cb377d7c0"} Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.714005 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" event={"ID":"edea8308-f2c7-4f10-993c-974327a36727","Type":"ContainerStarted","Data":"ea075161fc0ba88a8c9c3d0eaf5da57991df70ce7cba9dc4943ed932367998a9"} Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.720631 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" event={"ID":"56f5bc83-36d4-41e0-8b6f-2d0854d7a171","Type":"ContainerStarted","Data":"c234aab2b5987a184db4b9c3e78803d1b113bb91a28bac66a4865b9eee8979ee"} Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.723443 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c59b585f-gvjjl" event={"ID":"eff171da-ce4a-4c88-b7bd-b7b88e6ad322","Type":"ContainerStarted","Data":"1c09eadb99d613bc85130c76cf3fab952fc4000f8f699faa71d9527e30c09254"} Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.876429 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:49 crc kubenswrapper[4705]: W0216 15:14:49.907184 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9fe5954_9b6f_4ba1_b8c5_fe8367c66051.slice/crio-e16d04a7f9d423e5d1a7cda000b3cafa9d337f27903270f442c980a7edf294b1 WatchSource:0}: Error finding container e16d04a7f9d423e5d1a7cda000b3cafa9d337f27903270f442c980a7edf294b1: Status 404 returned error can't find the container with id e16d04a7f9d423e5d1a7cda000b3cafa9d337f27903270f442c980a7edf294b1 Feb 16 15:14:49 crc kubenswrapper[4705]: I0216 15:14:49.966426 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bkmwk"] Feb 16 15:14:50 crc 
kubenswrapper[4705]: I0216 15:14:50.169666 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:50 crc kubenswrapper[4705]: W0216 15:14:50.198285 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69bc6a88_b325_43bd_af4c_55283723a765.slice/crio-50266016e66cb9551ee585a38ed927f03edf352443c8c419df65fdc099965ff7 WatchSource:0}: Error finding container 50266016e66cb9551ee585a38ed927f03edf352443c8c419df65fdc099965ff7: Status 404 returned error can't find the container with id 50266016e66cb9551ee585a38ed927f03edf352443c8c419df65fdc099965ff7 Feb 16 15:14:50 crc kubenswrapper[4705]: I0216 15:14:50.757735 4705 generic.go:334] "Generic (PLEG): container finished" podID="56f5bc83-36d4-41e0-8b6f-2d0854d7a171" containerID="9705e72874e46f0081958ec36bf68284093b2887f407f1b198ebd0d1287ad79d" exitCode=0 Feb 16 15:14:50 crc kubenswrapper[4705]: I0216 15:14:50.798110 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" event={"ID":"56f5bc83-36d4-41e0-8b6f-2d0854d7a171","Type":"ContainerDied","Data":"9705e72874e46f0081958ec36bf68284093b2887f407f1b198ebd0d1287ad79d"} Feb 16 15:14:50 crc kubenswrapper[4705]: I0216 15:14:50.798314 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" event={"ID":"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6","Type":"ContainerStarted","Data":"40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6"} Feb 16 15:14:50 crc kubenswrapper[4705]: I0216 15:14:50.803288 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" event={"ID":"541411df-f636-4dab-a4e2-2ecc8933f236","Type":"ContainerStarted","Data":"4cd8d63ef6157fd647119bfab51e4fd5281201daf21b70697f5351220cfe9c1c"} Feb 16 15:14:50 crc kubenswrapper[4705]: I0216 15:14:50.824606 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"69bc6a88-b325-43bd-af4c-55283723a765","Type":"ContainerStarted","Data":"50266016e66cb9551ee585a38ed927f03edf352443c8c419df65fdc099965ff7"} Feb 16 15:14:50 crc kubenswrapper[4705]: I0216 15:14:50.832534 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051","Type":"ContainerStarted","Data":"e16d04a7f9d423e5d1a7cda000b3cafa9d337f27903270f442c980a7edf294b1"} Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.573486 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.737473 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r7hs\" (UniqueName: \"kubernetes.io/projected/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-kube-api-access-6r7hs\") pod \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.738042 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-swift-storage-0\") pod \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.738215 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-svc\") pod \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.738362 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-config\") pod 
\"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.738446 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-nb\") pod \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.738500 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-sb\") pod \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\" (UID: \"56f5bc83-36d4-41e0-8b6f-2d0854d7a171\") " Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.768614 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-kube-api-access-6r7hs" (OuterVolumeSpecName: "kube-api-access-6r7hs") pod "56f5bc83-36d4-41e0-8b6f-2d0854d7a171" (UID: "56f5bc83-36d4-41e0-8b6f-2d0854d7a171"). InnerVolumeSpecName "kube-api-access-6r7hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.782764 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56f5bc83-36d4-41e0-8b6f-2d0854d7a171" (UID: "56f5bc83-36d4-41e0-8b6f-2d0854d7a171"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.792953 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-config" (OuterVolumeSpecName: "config") pod "56f5bc83-36d4-41e0-8b6f-2d0854d7a171" (UID: "56f5bc83-36d4-41e0-8b6f-2d0854d7a171"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.793868 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56f5bc83-36d4-41e0-8b6f-2d0854d7a171" (UID: "56f5bc83-36d4-41e0-8b6f-2d0854d7a171"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.804244 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56f5bc83-36d4-41e0-8b6f-2d0854d7a171" (UID: "56f5bc83-36d4-41e0-8b6f-2d0854d7a171"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.805290 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56f5bc83-36d4-41e0-8b6f-2d0854d7a171" (UID: "56f5bc83-36d4-41e0-8b6f-2d0854d7a171"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.842988 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r7hs\" (UniqueName: \"kubernetes.io/projected/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-kube-api-access-6r7hs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.843035 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.843049 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.843061 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.843073 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.843086 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f5bc83-36d4-41e0-8b6f-2d0854d7a171-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.850981 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69bc6a88-b325-43bd-af4c-55283723a765","Type":"ContainerStarted","Data":"7ff5e61a38310582085a72b8f58aa1b56f16c702a01b7dce04612b124d545df9"} Feb 16 15:14:51 crc kubenswrapper[4705]: 
I0216 15:14:51.856123 4705 generic.go:334] "Generic (PLEG): container finished" podID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerID="9a7cdbca15bcb88834b38bafb18effcd247f1df4a482e11737dd84f2fd64e363" exitCode=0 Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.856190 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerDied","Data":"9a7cdbca15bcb88834b38bafb18effcd247f1df4a482e11737dd84f2fd64e363"} Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.858845 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" event={"ID":"56f5bc83-36d4-41e0-8b6f-2d0854d7a171","Type":"ContainerDied","Data":"c234aab2b5987a184db4b9c3e78803d1b113bb91a28bac66a4865b9eee8979ee"} Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.858906 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-qlq6b" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.858917 4705 scope.go:117] "RemoveContainer" containerID="9705e72874e46f0081958ec36bf68284093b2887f407f1b198ebd0d1287ad79d" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.863018 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" event={"ID":"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6","Type":"ContainerStarted","Data":"b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4"} Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.863503 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.863647 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.868008 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="541411df-f636-4dab-a4e2-2ecc8933f236" containerID="40437351e7b265646ad6bf7b8802bcd81622e7977bf5739847bd739b6a21b1a3" exitCode=0 Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.868050 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" event={"ID":"541411df-f636-4dab-a4e2-2ecc8933f236","Type":"ContainerDied","Data":"40437351e7b265646ad6bf7b8802bcd81622e7977bf5739847bd739b6a21b1a3"} Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.897744 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" podStartSLOduration=4.897722185 podStartE2EDuration="4.897722185s" podCreationTimestamp="2026-02-16 15:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:51.883893016 +0000 UTC m=+1286.068870092" watchObservedRunningTime="2026-02-16 15:14:51.897722185 +0000 UTC m=+1286.082699261" Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.969184 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qlq6b"] Feb 16 15:14:51 crc kubenswrapper[4705]: I0216 15:14:51.980975 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-qlq6b"] Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.065111 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.066388 4705 scope.go:117] "RemoveContainer" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9" Feb 16 15:14:52 crc kubenswrapper[4705]: E0216 15:14:52.066596 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd 
pod=neutron-77886f8dfb-96bnn_openstack(b078dc5a-bbed-4006-9d76-370271a27353)\"" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353" Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.067839 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.073048 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-api" probeResult="failure" output="Get \"http://10.217.0.192:9696/\": dial tcp 10.217.0.192:9696: connect: connection refused" Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.131198 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.445114 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f5bc83-36d4-41e0-8b6f-2d0854d7a171" path="/var/lib/kubelet/pods/56f5bc83-36d4-41e0-8b6f-2d0854d7a171/volumes" Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.894235 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051","Type":"ContainerStarted","Data":"bb8a8bd06610a977547020f28b005ef33562f444a80a73905635dff3873c8f4e"} Feb 16 15:14:52 crc kubenswrapper[4705]: I0216 15:14:52.901063 4705 scope.go:117] "RemoveContainer" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9" Feb 16 15:14:52 crc kubenswrapper[4705]: E0216 15:14:52.901352 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-77886f8dfb-96bnn_openstack(b078dc5a-bbed-4006-9d76-370271a27353)\"" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353" 
Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.417645 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.516046 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77886f8dfb-96bnn"] Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.516328 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77886f8dfb-96bnn" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-api" containerID="cri-o://9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e" gracePeriod=30 Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.750427 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-675dd58676-vnqw2"] Feb 16 15:14:54 crc kubenswrapper[4705]: E0216 15:14:54.751817 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f5bc83-36d4-41e0-8b6f-2d0854d7a171" containerName="init" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.751900 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f5bc83-36d4-41e0-8b6f-2d0854d7a171" containerName="init" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.752242 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f5bc83-36d4-41e0-8b6f-2d0854d7a171" containerName="init" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.753723 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.758700 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.759032 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.769724 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-675dd58676-vnqw2"] Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851208 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-combined-ca-bundle\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851310 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-config-data\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851457 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2k4z\" (UniqueName: \"kubernetes.io/projected/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-kube-api-access-l2k4z\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851635 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-config-data-custom\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851657 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-public-tls-certs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851716 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-internal-tls-certs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.851875 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-logs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.881723 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66f94f69bf-82g78"] Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.886159 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.935051 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66f94f69bf-82g78"] Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.949135 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69bc6a88-b325-43bd-af4c-55283723a765","Type":"ContainerStarted","Data":"baa2831e35077fa704a32b810c85079d3310969dea312c19a9de3b1a5f7540ac"} Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.949678 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api-log" containerID="cri-o://7ff5e61a38310582085a72b8f58aa1b56f16c702a01b7dce04612b124d545df9" gracePeriod=30 Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.950268 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.950782 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api" containerID="cri-o://baa2831e35077fa704a32b810c85079d3310969dea312c19a9de3b1a5f7540ac" gracePeriod=30 Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.956508 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-config-data\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.961703 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-ovndb-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.961928 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2k4z\" (UniqueName: \"kubernetes.io/projected/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-kube-api-access-l2k4z\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.962070 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-public-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.962168 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-httpd-config\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.962337 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-config-data-custom\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.962829 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-public-tls-certs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.962979 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-internal-tls-certs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.963064 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-config\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.963148 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-internal-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.963329 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-logs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.963454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-combined-ca-bundle\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.965210 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r6qw\" (UniqueName: \"kubernetes.io/projected/f7edca3b-82f6-4cfb-9781-664afa855ba8-kube-api-access-2r6qw\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.965524 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-combined-ca-bundle\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.966111 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" event={"ID":"edea8308-f2c7-4f10-993c-974327a36727","Type":"ContainerStarted","Data":"18b2d58e32816cdf3a5f332aab5d3f8d5c7adef8ee63b9669545a686d9a96ee9"} Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.966166 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" event={"ID":"edea8308-f2c7-4f10-993c-974327a36727","Type":"ContainerStarted","Data":"8c54e77aa75a6d4e90c4a9051bd2351aa28e06ee5a65c76f41472f2d0ad3f455"} Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.968189 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-logs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " 
pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.972700 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051","Type":"ContainerStarted","Data":"8f26767c276d445f1009e592eb27c8864a4735b03be5333ab37f03b4b14320dd"} Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.983604 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-public-tls-certs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.984169 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-internal-tls-certs\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.984354 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-config-data-custom\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.988189 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-config-data\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:54 crc kubenswrapper[4705]: I0216 15:14:54.989930 4705 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l2k4z\" (UniqueName: \"kubernetes.io/projected/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-kube-api-access-l2k4z\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.001150 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c59b585f-gvjjl" event={"ID":"eff171da-ce4a-4c88-b7bd-b7b88e6ad322","Type":"ContainerStarted","Data":"78738dd29b3820c40646249d22aa469be73b0b7da171598d84211a0e2e406853"} Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.001219 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c59b585f-gvjjl" event={"ID":"eff171da-ce4a-4c88-b7bd-b7b88e6ad322","Type":"ContainerStarted","Data":"f02dbd745e13763de44f811718db3b8c4ba4c2c33d9ecb59872a53ccee0886dc"} Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.026286 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" event={"ID":"541411df-f636-4dab-a4e2-2ecc8933f236","Type":"ContainerStarted","Data":"e0319e97509f4edfb41168b6ddd4f0b12f375b7360c62104003abe78576492a1"} Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.026769 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.037149 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2c420d-8288-48f7-b53e-f480bf6d5a7f-combined-ca-bundle\") pod \"barbican-api-675dd58676-vnqw2\" (UID: \"ab2c420d-8288-48f7-b53e-f480bf6d5a7f\") " pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.050698 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.050657104 
podStartE2EDuration="7.050657104s" podCreationTimestamp="2026-02-16 15:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:54.991175282 +0000 UTC m=+1289.176152368" watchObservedRunningTime="2026-02-16 15:14:55.050657104 +0000 UTC m=+1289.235634180" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.072643 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-combined-ca-bundle\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.072784 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r6qw\" (UniqueName: \"kubernetes.io/projected/f7edca3b-82f6-4cfb-9781-664afa855ba8-kube-api-access-2r6qw\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.080048 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-ovndb-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.080307 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-public-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.080384 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-httpd-config\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.083170 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-config\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.083347 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-internal-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.097229 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.098458 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-internal-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.099147 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-combined-ca-bundle\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.099797 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-public-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.103934 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-config\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.110180 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-httpd-config\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.116379 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7edca3b-82f6-4cfb-9781-664afa855ba8-ovndb-tls-certs\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.144128 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r6qw\" (UniqueName: \"kubernetes.io/projected/f7edca3b-82f6-4cfb-9781-664afa855ba8-kube-api-access-2r6qw\") pod \"neutron-66f94f69bf-82g78\" (UID: \"f7edca3b-82f6-4cfb-9781-664afa855ba8\") " pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.142966 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5bf77f7566-frgcc" podStartSLOduration=3.65196585 podStartE2EDuration="8.142129697s" podCreationTimestamp="2026-02-16 15:14:47 +0000 UTC" firstStartedPulling="2026-02-16 15:14:49.159720945 +0000 UTC m=+1283.344698021" lastFinishedPulling="2026-02-16 15:14:53.649884792 +0000 UTC m=+1287.834861868" observedRunningTime="2026-02-16 15:14:55.019121358 +0000 UTC m=+1289.204098444" watchObservedRunningTime="2026-02-16 15:14:55.142129697 +0000 UTC m=+1289.327106773" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.158467 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.017000586 podStartE2EDuration="7.158439636s" podCreationTimestamp="2026-02-16 15:14:48 +0000 UTC" firstStartedPulling="2026-02-16 15:14:49.9130172 +0000 UTC m=+1284.097994276" lastFinishedPulling="2026-02-16 15:14:51.05445625 +0000 UTC m=+1285.239433326" observedRunningTime="2026-02-16 15:14:55.04161955 +0000 UTC m=+1289.226596626" watchObservedRunningTime="2026-02-16 15:14:55.158439636 +0000 UTC m=+1289.343416712" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 
15:14:55.218138 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" podStartSLOduration=7.218107674 podStartE2EDuration="7.218107674s" podCreationTimestamp="2026-02-16 15:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:55.06685486 +0000 UTC m=+1289.251831936" watchObservedRunningTime="2026-02-16 15:14:55.218107674 +0000 UTC m=+1289.403084740" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.233042 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68c59b585f-gvjjl" podStartSLOduration=3.964790519 podStartE2EDuration="8.232827348s" podCreationTimestamp="2026-02-16 15:14:47 +0000 UTC" firstStartedPulling="2026-02-16 15:14:49.392299296 +0000 UTC m=+1283.577276372" lastFinishedPulling="2026-02-16 15:14:53.660336125 +0000 UTC m=+1287.845313201" observedRunningTime="2026-02-16 15:14:55.136277372 +0000 UTC m=+1289.321254458" watchObservedRunningTime="2026-02-16 15:14:55.232827348 +0000 UTC m=+1289.417804424" Feb 16 15:14:55 crc kubenswrapper[4705]: I0216 15:14:55.244266 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:56 crc kubenswrapper[4705]: I0216 15:14:56.042338 4705 generic.go:334] "Generic (PLEG): container finished" podID="69bc6a88-b325-43bd-af4c-55283723a765" containerID="7ff5e61a38310582085a72b8f58aa1b56f16c702a01b7dce04612b124d545df9" exitCode=143 Feb 16 15:14:56 crc kubenswrapper[4705]: I0216 15:14:56.043588 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69bc6a88-b325-43bd-af4c-55283723a765","Type":"ContainerDied","Data":"7ff5e61a38310582085a72b8f58aa1b56f16c702a01b7dce04612b124d545df9"} Feb 16 15:14:56 crc kubenswrapper[4705]: I0216 15:14:56.176463 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-675dd58676-vnqw2"] Feb 16 15:14:56 crc kubenswrapper[4705]: W0216 15:14:56.178784 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab2c420d_8288_48f7_b53e_f480bf6d5a7f.slice/crio-c3a675bf4490daf65bedbfa8f6c01dc77dc451383742d71a88c7ddd964ab2cb4 WatchSource:0}: Error finding container c3a675bf4490daf65bedbfa8f6c01dc77dc451383742d71a88c7ddd964ab2cb4: Status 404 returned error can't find the container with id c3a675bf4490daf65bedbfa8f6c01dc77dc451383742d71a88c7ddd964ab2cb4 Feb 16 15:14:56 crc kubenswrapper[4705]: I0216 15:14:56.484345 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66f94f69bf-82g78"] Feb 16 15:14:56 crc kubenswrapper[4705]: W0216 15:14:56.524700 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7edca3b_82f6_4cfb_9781_664afa855ba8.slice/crio-16ba3099ee67684bc63739c914005adda05418bd1c7583db66826cfd69ac1d02 WatchSource:0}: Error finding container 16ba3099ee67684bc63739c914005adda05418bd1c7583db66826cfd69ac1d02: Status 404 returned error can't find the container with id 
16ba3099ee67684bc63739c914005adda05418bd1c7583db66826cfd69ac1d02 Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.085867 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66f94f69bf-82g78" event={"ID":"f7edca3b-82f6-4cfb-9781-664afa855ba8","Type":"ContainerStarted","Data":"eb9bef067cacc8899ddad2f91d049253141374552bd4696dfbab09ae65a28437"} Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.086559 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66f94f69bf-82g78" event={"ID":"f7edca3b-82f6-4cfb-9781-664afa855ba8","Type":"ContainerStarted","Data":"16ba3099ee67684bc63739c914005adda05418bd1c7583db66826cfd69ac1d02"} Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.101340 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-675dd58676-vnqw2" event={"ID":"ab2c420d-8288-48f7-b53e-f480bf6d5a7f","Type":"ContainerStarted","Data":"e85400a6e39d43f0fdd9e551002e83837fca9acd0c3de06ca848b1cadbe00920"} Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.101426 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-675dd58676-vnqw2" event={"ID":"ab2c420d-8288-48f7-b53e-f480bf6d5a7f","Type":"ContainerStarted","Data":"2d8e32a95ce5bb87765b8abffc1cd4ec9203bf1b92b0c04d4ab0889c6cb2e6e5"} Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.101448 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-675dd58676-vnqw2" event={"ID":"ab2c420d-8288-48f7-b53e-f480bf6d5a7f","Type":"ContainerStarted","Data":"c3a675bf4490daf65bedbfa8f6c01dc77dc451383742d71a88c7ddd964ab2cb4"} Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.101488 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:14:57 crc kubenswrapper[4705]: I0216 15:14:57.101509 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 
15:14:58 crc kubenswrapper[4705]: I0216 15:14:58.115036 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66f94f69bf-82g78" event={"ID":"f7edca3b-82f6-4cfb-9781-664afa855ba8","Type":"ContainerStarted","Data":"6f67e5b7df9c0341ab3be966b2623ff8564c7d207abc503b1a0a866c06b9680d"} Feb 16 15:14:58 crc kubenswrapper[4705]: I0216 15:14:58.115736 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:14:58 crc kubenswrapper[4705]: I0216 15:14:58.142648 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66f94f69bf-82g78" podStartSLOduration=4.1426233 podStartE2EDuration="4.1426233s" podCreationTimestamp="2026-02-16 15:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:58.131792325 +0000 UTC m=+1292.316769411" watchObservedRunningTime="2026-02-16 15:14:58.1426233 +0000 UTC m=+1292.327600376" Feb 16 15:14:58 crc kubenswrapper[4705]: I0216 15:14:58.144820 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-675dd58676-vnqw2" podStartSLOduration=4.144805671 podStartE2EDuration="4.144805671s" podCreationTimestamp="2026-02-16 15:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:57.141406823 +0000 UTC m=+1291.326383899" watchObservedRunningTime="2026-02-16 15:14:58.144805671 +0000 UTC m=+1292.329782757" Feb 16 15:14:59 crc kubenswrapper[4705]: I0216 15:14:59.048312 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 15:14:59 crc kubenswrapper[4705]: I0216 15:14:59.178655 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:14:59 crc 
kubenswrapper[4705]: I0216 15:14:59.335445 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gm2hh"] Feb 16 15:14:59 crc kubenswrapper[4705]: I0216 15:14:59.335763 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerName="dnsmasq-dns" containerID="cri-o://f6951bab61da5a049a56c33ba93e49df3fdc49b02f25b9de92342c70737b1218" gracePeriod=10 Feb 16 15:14:59 crc kubenswrapper[4705]: I0216 15:14:59.906004 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.044195 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.174516 4705 generic.go:334] "Generic (PLEG): container finished" podID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerID="f6951bab61da5a049a56c33ba93e49df3fdc49b02f25b9de92342c70737b1218" exitCode=0 Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.174718 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" event={"ID":"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d","Type":"ContainerDied","Data":"f6951bab61da5a049a56c33ba93e49df3fdc49b02f25b9de92342c70737b1218"} Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.174796 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" event={"ID":"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d","Type":"ContainerDied","Data":"18c9029ea0424e632d8d42d7cf6b7457772241ad33ce395b5abed00841718251"} Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.174809 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18c9029ea0424e632d8d42d7cf6b7457772241ad33ce395b5abed00841718251" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.174863 4705 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="probe" containerID="cri-o://8f26767c276d445f1009e592eb27c8864a4735b03be5333ab37f03b4b14320dd" gracePeriod=30 Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.175001 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="cinder-scheduler" containerID="cri-o://bb8a8bd06610a977547020f28b005ef33562f444a80a73905635dff3873c8f4e" gracePeriod=30 Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.191464 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm"] Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.193545 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.196743 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.196819 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.207253 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm"] Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.273177 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6f056a-614c-4e3d-9bfe-de451b1d951d-config-volume\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.273403 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6f056a-614c-4e3d-9bfe-de451b1d951d-secret-volume\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.273717 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbvb2\" (UniqueName: \"kubernetes.io/projected/4c6f056a-614c-4e3d-9bfe-de451b1d951d-kube-api-access-cbvb2\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.377722 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6f056a-614c-4e3d-9bfe-de451b1d951d-config-volume\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.377811 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6f056a-614c-4e3d-9bfe-de451b1d951d-secret-volume\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.377932 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbvb2\" (UniqueName: 
\"kubernetes.io/projected/4c6f056a-614c-4e3d-9bfe-de451b1d951d-kube-api-access-cbvb2\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.379577 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6f056a-614c-4e3d-9bfe-de451b1d951d-config-volume\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.387796 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.401306 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6f056a-614c-4e3d-9bfe-de451b1d951d-secret-volume\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.414129 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbvb2\" (UniqueName: \"kubernetes.io/projected/4c6f056a-614c-4e3d-9bfe-de451b1d951d-kube-api-access-cbvb2\") pod \"collect-profiles-29520915-lwjnm\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.456664 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.479446 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlp99\" (UniqueName: \"kubernetes.io/projected/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-kube-api-access-vlp99\") pod \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.479526 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-nb\") pod \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.479723 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-svc\") pod \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.479838 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-swift-storage-0\") pod \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.479880 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-config\") pod \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.479977 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-sb\") pod \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\" (UID: \"736c4c77-178b-40b8-8f6f-adb8b4b1ea6d\") " Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.511823 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-kube-api-access-vlp99" (OuterVolumeSpecName: "kube-api-access-vlp99") pod "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" (UID: "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d"). InnerVolumeSpecName "kube-api-access-vlp99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.584603 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlp99\" (UniqueName: \"kubernetes.io/projected/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-kube-api-access-vlp99\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.628070 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" (UID: "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.688732 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.910542 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-config" (OuterVolumeSpecName: "config") pod "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" (UID: "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.910577 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" (UID: "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.910567 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" (UID: "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.914097 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" (UID: "736c4c77-178b-40b8-8f6f-adb8b4b1ea6d"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.942821 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.998037 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.998098 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.998111 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:00 crc kubenswrapper[4705]: I0216 15:15:00.998124 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.086496 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.106114 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.188911 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/2.log" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.199678 4705 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.225658 4705 generic.go:334] "Generic (PLEG): container finished" podID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerID="8f26767c276d445f1009e592eb27c8864a4735b03be5333ab37f03b4b14320dd" exitCode=0 Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.225789 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051","Type":"ContainerDied","Data":"8f26767c276d445f1009e592eb27c8864a4735b03be5333ab37f03b4b14320dd"} Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.240547 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77886f8dfb-96bnn_b078dc5a-bbed-4006-9d76-370271a27353/neutron-httpd/2.log" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.241675 4705 generic.go:334] "Generic (PLEG): container finished" podID="b078dc5a-bbed-4006-9d76-370271a27353" containerID="9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e" exitCode=0 Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.241783 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gm2hh" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.241784 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerDied","Data":"9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e"} Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.241869 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77886f8dfb-96bnn" event={"ID":"b078dc5a-bbed-4006-9d76-370271a27353","Type":"ContainerDied","Data":"8d6f6b83879b1871c1ce4b4df4249213068c9c5c2acaf7af7da436588553b117"} Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.241907 4705 scope.go:117] "RemoveContainer" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.241810 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77886f8dfb-96bnn" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.313394 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-combined-ca-bundle\") pod \"b078dc5a-bbed-4006-9d76-370271a27353\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.313558 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-ovndb-tls-certs\") pod \"b078dc5a-bbed-4006-9d76-370271a27353\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.313645 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-httpd-config\") pod \"b078dc5a-bbed-4006-9d76-370271a27353\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.313771 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgx65\" (UniqueName: \"kubernetes.io/projected/b078dc5a-bbed-4006-9d76-370271a27353-kube-api-access-fgx65\") pod \"b078dc5a-bbed-4006-9d76-370271a27353\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.313857 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-config\") pod \"b078dc5a-bbed-4006-9d76-370271a27353\" (UID: \"b078dc5a-bbed-4006-9d76-370271a27353\") " Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.325560 4705 scope.go:117] "RemoveContainer" containerID="9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.329510 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b078dc5a-bbed-4006-9d76-370271a27353" (UID: "b078dc5a-bbed-4006-9d76-370271a27353"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.333708 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b078dc5a-bbed-4006-9d76-370271a27353-kube-api-access-fgx65" (OuterVolumeSpecName: "kube-api-access-fgx65") pod "b078dc5a-bbed-4006-9d76-370271a27353" (UID: "b078dc5a-bbed-4006-9d76-370271a27353"). InnerVolumeSpecName "kube-api-access-fgx65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.339810 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gm2hh"] Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.353933 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gm2hh"] Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.408576 4705 scope.go:117] "RemoveContainer" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.408892 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b078dc5a-bbed-4006-9d76-370271a27353" (UID: "b078dc5a-bbed-4006-9d76-370271a27353"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.432053 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9\": container with ID starting with d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9 not found: ID does not exist" containerID="d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.432113 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9"} err="failed to get container status \"d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9\": rpc error: code = NotFound desc = could not find container \"d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9\": container with ID starting with 
d427cc67ec159cbf3e78e5565e18298d1e8832544389968dd35bc1ea5f5e55a9 not found: ID does not exist" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.432141 4705 scope.go:117] "RemoveContainer" containerID="9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.440445 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.440482 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.440493 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgx65\" (UniqueName: \"kubernetes.io/projected/b078dc5a-bbed-4006-9d76-370271a27353-kube-api-access-fgx65\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.442232 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e\": container with ID starting with 9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e not found: ID does not exist" containerID="9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.442266 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e"} err="failed to get container status \"9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e\": rpc error: code = NotFound desc = could not find container 
\"9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e\": container with ID starting with 9be67d5601b16343c6febccf20054b2b6e7533cc395cbe3da2ec7cc09bca612e not found: ID does not exist" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.448502 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-config" (OuterVolumeSpecName: "config") pod "b078dc5a-bbed-4006-9d76-370271a27353" (UID: "b078dc5a-bbed-4006-9d76-370271a27353"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.450632 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6599894f76-dcwz8"] Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.451280 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451301 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.451320 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-api" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451327 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-api" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.451339 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerName="dnsmasq-dns" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451345 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerName="dnsmasq-dns" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 
15:15:01.451359 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451378 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.451395 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451400 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: E0216 15:15:01.451451 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerName="init" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451457 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerName="init" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451654 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-api" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451670 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451690 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451709 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b078dc5a-bbed-4006-9d76-370271a27353" containerName="neutron-httpd" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.451724 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" containerName="dnsmasq-dns" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.453120 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.485500 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6599894f76-dcwz8"] Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.501168 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm"] Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.509090 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b078dc5a-bbed-4006-9d76-370271a27353" (UID: "b078dc5a-bbed-4006-9d76-370271a27353"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:01 crc kubenswrapper[4705]: W0216 15:15:01.522844 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c6f056a_614c_4e3d_9bfe_de451b1d951d.slice/crio-bfda3b70aece51727a8a9fabab74fe1a183106ebabf798fe30aa1e17c1eca4c7 WatchSource:0}: Error finding container bfda3b70aece51727a8a9fabab74fe1a183106ebabf798fe30aa1e17c1eca4c7: Status 404 returned error can't find the container with id bfda3b70aece51727a8a9fabab74fe1a183106ebabf798fe30aa1e17c1eca4c7 Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.545917 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-internal-tls-certs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.546092 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-public-tls-certs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.546229 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4122899e-95db-413a-ac71-f0574969753a-logs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.546264 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9r2\" (UniqueName: 
\"kubernetes.io/projected/4122899e-95db-413a-ac71-f0574969753a-kube-api-access-pk9r2\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.546505 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-scripts\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.546609 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-config-data\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.546664 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-combined-ca-bundle\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.547310 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.547327 4705 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b078dc5a-bbed-4006-9d76-370271a27353-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.653735 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-combined-ca-bundle\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.653962 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-internal-tls-certs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.654094 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-public-tls-certs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.654165 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4122899e-95db-413a-ac71-f0574969753a-logs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.654214 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk9r2\" (UniqueName: \"kubernetes.io/projected/4122899e-95db-413a-ac71-f0574969753a-kube-api-access-pk9r2\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.654311 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-scripts\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.654399 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-config-data\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.657016 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4122899e-95db-413a-ac71-f0574969753a-logs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.664815 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-scripts\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.665724 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-combined-ca-bundle\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.673897 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-internal-tls-certs\") pod \"placement-6599894f76-dcwz8\" 
(UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.677853 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk9r2\" (UniqueName: \"kubernetes.io/projected/4122899e-95db-413a-ac71-f0574969753a-kube-api-access-pk9r2\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.684735 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77886f8dfb-96bnn"] Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.685182 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.685239 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.687691 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-public-tls-certs\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.688760 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4122899e-95db-413a-ac71-f0574969753a-config-data\") pod \"placement-6599894f76-dcwz8\" (UID: \"4122899e-95db-413a-ac71-f0574969753a\") " pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.707767 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-77886f8dfb-96bnn"] Feb 16 15:15:01 crc kubenswrapper[4705]: I0216 15:15:01.794786 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:02 crc kubenswrapper[4705]: I0216 15:15:02.255899 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" event={"ID":"4c6f056a-614c-4e3d-9bfe-de451b1d951d","Type":"ContainerStarted","Data":"12cac5303820f9f4b9790cf3756c563cd44a6389204cd476bba276cfd10f485f"} Feb 16 15:15:02 crc kubenswrapper[4705]: I0216 15:15:02.256257 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" event={"ID":"4c6f056a-614c-4e3d-9bfe-de451b1d951d","Type":"ContainerStarted","Data":"bfda3b70aece51727a8a9fabab74fe1a183106ebabf798fe30aa1e17c1eca4c7"} Feb 16 15:15:02 crc kubenswrapper[4705]: I0216 15:15:02.321627 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" podStartSLOduration=2.321599534 podStartE2EDuration="2.321599534s" podCreationTimestamp="2026-02-16 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:02.293638597 +0000 UTC m=+1296.478615673" watchObservedRunningTime="2026-02-16 15:15:02.321599534 +0000 UTC m=+1296.506576630" Feb 16 15:15:02 crc kubenswrapper[4705]: I0216 15:15:02.475897 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="736c4c77-178b-40b8-8f6f-adb8b4b1ea6d" path="/var/lib/kubelet/pods/736c4c77-178b-40b8-8f6f-adb8b4b1ea6d/volumes" Feb 16 15:15:02 crc kubenswrapper[4705]: I0216 15:15:02.477032 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b078dc5a-bbed-4006-9d76-370271a27353" path="/var/lib/kubelet/pods/b078dc5a-bbed-4006-9d76-370271a27353/volumes" Feb 16 15:15:02 crc kubenswrapper[4705]: I0216 15:15:02.809058 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6599894f76-dcwz8"] Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.298323 4705 generic.go:334] "Generic (PLEG): container finished" podID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerID="bb8a8bd06610a977547020f28b005ef33562f444a80a73905635dff3873c8f4e" exitCode=0 Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.300583 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051","Type":"ContainerDied","Data":"bb8a8bd06610a977547020f28b005ef33562f444a80a73905635dff3873c8f4e"} Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.304460 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6599894f76-dcwz8" event={"ID":"4122899e-95db-413a-ac71-f0574969753a","Type":"ContainerStarted","Data":"10b69de5c0b53c7b82189dc1ee98e780b478862bde93d7141a4094e042544984"} Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.304963 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6599894f76-dcwz8" event={"ID":"4122899e-95db-413a-ac71-f0574969753a","Type":"ContainerStarted","Data":"d00af7746386b7f352c8fff117ea38852e34da412e852defcda4a225f579e064"} Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.331449 4705 generic.go:334] "Generic (PLEG): container finished" podID="4c6f056a-614c-4e3d-9bfe-de451b1d951d" containerID="12cac5303820f9f4b9790cf3756c563cd44a6389204cd476bba276cfd10f485f" exitCode=0 Feb 16 15:15:03 crc 
kubenswrapper[4705]: I0216 15:15:03.331504 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" event={"ID":"4c6f056a-614c-4e3d-9bfe-de451b1d951d","Type":"ContainerDied","Data":"12cac5303820f9f4b9790cf3756c563cd44a6389204cd476bba276cfd10f485f"} Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.415058 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.552814 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-scripts\") pod \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.553436 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-etc-machine-id\") pod \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.553553 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data-custom\") pod \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.553671 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-combined-ca-bundle\") pod \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.553768 4705 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data\") pod \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.553900 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpwc7\" (UniqueName: \"kubernetes.io/projected/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-kube-api-access-jpwc7\") pod \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\" (UID: \"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051\") " Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.558486 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" (UID: "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.564391 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-scripts" (OuterVolumeSpecName: "scripts") pod "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" (UID: "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.564973 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" (UID: "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.571740 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-kube-api-access-jpwc7" (OuterVolumeSpecName: "kube-api-access-jpwc7") pod "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" (UID: "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051"). InnerVolumeSpecName "kube-api-access-jpwc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.635275 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" (UID: "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.657194 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpwc7\" (UniqueName: \"kubernetes.io/projected/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-kube-api-access-jpwc7\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.657677 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.657770 4705 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.657853 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.657961 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.737262 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data" (OuterVolumeSpecName: "config-data") pod "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" (UID: "e9fe5954-9b6f-4ba1-b8c5-fe8367c66051"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:03 crc kubenswrapper[4705]: I0216 15:15:03.763519 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.325625 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.202:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.378826 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9fe5954-9b6f-4ba1-b8c5-fe8367c66051","Type":"ContainerDied","Data":"e16d04a7f9d423e5d1a7cda000b3cafa9d337f27903270f442c980a7edf294b1"} Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.378916 4705 scope.go:117] "RemoveContainer" containerID="8f26767c276d445f1009e592eb27c8864a4735b03be5333ab37f03b4b14320dd" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.379144 4705 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.406654 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6599894f76-dcwz8" event={"ID":"4122899e-95db-413a-ac71-f0574969753a","Type":"ContainerStarted","Data":"e772edf09fbb657c700bb40b6eb65545b240b2838a4027119ade34b9d4d3fc40"} Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.406835 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.406936 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.437292 4705 scope.go:117] "RemoveContainer" containerID="bb8a8bd06610a977547020f28b005ef33562f444a80a73905635dff3873c8f4e" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.513916 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.559861 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.607663 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.608165 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6599894f76-dcwz8" podStartSLOduration=3.608138488 podStartE2EDuration="3.608138488s" podCreationTimestamp="2026-02-16 15:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:04.49301322 +0000 UTC m=+1298.677990296" watchObservedRunningTime="2026-02-16 15:15:04.608138488 +0000 UTC m=+1298.793115564" Feb 16 15:15:04 crc 
kubenswrapper[4705]: E0216 15:15:04.609032 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="probe" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.609073 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="probe" Feb 16 15:15:04 crc kubenswrapper[4705]: E0216 15:15:04.609128 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="cinder-scheduler" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.609153 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="cinder-scheduler" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.611038 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="cinder-scheduler" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.611096 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" containerName="probe" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.614176 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.617879 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.716454 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-config-data\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.716608 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmvnq\" (UniqueName: \"kubernetes.io/projected/c85708f6-f2cb-4248-94e9-7c7763e88275-kube-api-access-cmvnq\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.716919 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c85708f6-f2cb-4248-94e9-7c7763e88275-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.717154 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.717266 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.717490 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-scripts\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.736465 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.821135 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.821276 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-scripts\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.821320 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-config-data\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.821426 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmvnq\" (UniqueName: 
\"kubernetes.io/projected/c85708f6-f2cb-4248-94e9-7c7763e88275-kube-api-access-cmvnq\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.821521 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c85708f6-f2cb-4248-94e9-7c7763e88275-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.821638 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.829068 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c85708f6-f2cb-4248-94e9-7c7763e88275-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.844840 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.847112 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-config-data\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " 
pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.850091 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.855255 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmvnq\" (UniqueName: \"kubernetes.io/projected/c85708f6-f2cb-4248-94e9-7c7763e88275-kube-api-access-cmvnq\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.860087 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c85708f6-f2cb-4248-94e9-7c7763e88275-scripts\") pod \"cinder-scheduler-0\" (UID: \"c85708f6-f2cb-4248-94e9-7c7763e88275\") " pod="openstack/cinder-scheduler-0" Feb 16 15:15:04 crc kubenswrapper[4705]: I0216 15:15:04.949704 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.098663 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.234008 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6f056a-614c-4e3d-9bfe-de451b1d951d-config-volume\") pod \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.234333 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbvb2\" (UniqueName: \"kubernetes.io/projected/4c6f056a-614c-4e3d-9bfe-de451b1d951d-kube-api-access-cbvb2\") pod \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.234515 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6f056a-614c-4e3d-9bfe-de451b1d951d-secret-volume\") pod \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\" (UID: \"4c6f056a-614c-4e3d-9bfe-de451b1d951d\") " Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.234986 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c6f056a-614c-4e3d-9bfe-de451b1d951d-config-volume" (OuterVolumeSpecName: "config-volume") pod "4c6f056a-614c-4e3d-9bfe-de451b1d951d" (UID: "4c6f056a-614c-4e3d-9bfe-de451b1d951d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.235548 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6f056a-614c-4e3d-9bfe-de451b1d951d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.243536 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6f056a-614c-4e3d-9bfe-de451b1d951d-kube-api-access-cbvb2" (OuterVolumeSpecName: "kube-api-access-cbvb2") pod "4c6f056a-614c-4e3d-9bfe-de451b1d951d" (UID: "4c6f056a-614c-4e3d-9bfe-de451b1d951d"). InnerVolumeSpecName "kube-api-access-cbvb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.246494 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c6f056a-614c-4e3d-9bfe-de451b1d951d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4c6f056a-614c-4e3d-9bfe-de451b1d951d" (UID: "4c6f056a-614c-4e3d-9bfe-de451b1d951d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.340354 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbvb2\" (UniqueName: \"kubernetes.io/projected/4c6f056a-614c-4e3d-9bfe-de451b1d951d-kube-api-access-cbvb2\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.340423 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6f056a-614c-4e3d-9bfe-de451b1d951d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.420851 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.420852 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm" event={"ID":"4c6f056a-614c-4e3d-9bfe-de451b1d951d","Type":"ContainerDied","Data":"bfda3b70aece51727a8a9fabab74fe1a183106ebabf798fe30aa1e17c1eca4c7"} Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.421024 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfda3b70aece51727a8a9fabab74fe1a183106ebabf798fe30aa1e17c1eca4c7" Feb 16 15:15:05 crc kubenswrapper[4705]: E0216 15:15:05.522746 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc85708f6_f2cb_4248_94e9_7c7763e88275.slice/crio-db3cd008d3efd4fa524bd570281b3d6c1ff70d241d540423b4cab74482c76e95\": RecentStats: unable to find data in memory cache]" Feb 16 15:15:05 crc kubenswrapper[4705]: I0216 15:15:05.562940 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:15:06 crc kubenswrapper[4705]: I0216 15:15:06.446046 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9fe5954-9b6f-4ba1-b8c5-fe8367c66051" path="/var/lib/kubelet/pods/e9fe5954-9b6f-4ba1-b8c5-fe8367c66051/volumes" Feb 16 15:15:06 crc kubenswrapper[4705]: I0216 15:15:06.517558 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c85708f6-f2cb-4248-94e9-7c7763e88275","Type":"ContainerStarted","Data":"16fc55e77902de5edf15730327b855ec3327bf7c048124bc8bfb673a6b5a034a"} Feb 16 15:15:06 crc kubenswrapper[4705]: I0216 15:15:06.517673 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"c85708f6-f2cb-4248-94e9-7c7763e88275","Type":"ContainerStarted","Data":"db3cd008d3efd4fa524bd570281b3d6c1ff70d241d540423b4cab74482c76e95"} Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.340667 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.429937 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.535324 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c85708f6-f2cb-4248-94e9-7c7763e88275","Type":"ContainerStarted","Data":"44abc1a158aec0b2637d9395912561ba28ea8b4333dc68c92cb6e190ad00ba6d"} Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.577810 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.577778232 podStartE2EDuration="3.577778232s" podCreationTimestamp="2026-02-16 15:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:07.565119826 +0000 UTC m=+1301.750096902" watchObservedRunningTime="2026-02-16 15:15:07.577778232 +0000 UTC m=+1301.762755308" Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.624503 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-675dd58676-vnqw2" Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.719010 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56f6fcbd5d-ql4gk"] Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.719432 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api-log" 
containerID="cri-o://40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6" gracePeriod=30 Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.719998 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api" containerID="cri-o://b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4" gracePeriod=30 Feb 16 15:15:07 crc kubenswrapper[4705]: I0216 15:15:07.916857 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6cd49d8b6b-6gdmx" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.204671 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 15:15:08 crc kubenswrapper[4705]: E0216 15:15:08.205889 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6f056a-614c-4e3d-9bfe-de451b1d951d" containerName="collect-profiles" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.205910 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6f056a-614c-4e3d-9bfe-de451b1d951d" containerName="collect-profiles" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.206276 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6f056a-614c-4e3d-9bfe-de451b1d951d" containerName="collect-profiles" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.207347 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.211516 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.211596 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.211998 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-j8dj6" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.220827 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.377192 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4881941b-eb71-45be-aa51-0e8431b29e89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.377263 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4881941b-eb71-45be-aa51-0e8431b29e89-openstack-config\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.377671 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwnzw\" (UniqueName: \"kubernetes.io/projected/4881941b-eb71-45be-aa51-0e8431b29e89-kube-api-access-bwnzw\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.378132 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4881941b-eb71-45be-aa51-0e8431b29e89-openstack-config-secret\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.488949 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwnzw\" (UniqueName: \"kubernetes.io/projected/4881941b-eb71-45be-aa51-0e8431b29e89-kube-api-access-bwnzw\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.489137 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4881941b-eb71-45be-aa51-0e8431b29e89-openstack-config-secret\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.489300 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4881941b-eb71-45be-aa51-0e8431b29e89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.489328 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4881941b-eb71-45be-aa51-0e8431b29e89-openstack-config\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.490690 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/4881941b-eb71-45be-aa51-0e8431b29e89-openstack-config\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.506349 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4881941b-eb71-45be-aa51-0e8431b29e89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.506488 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4881941b-eb71-45be-aa51-0e8431b29e89-openstack-config-secret\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.510518 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwnzw\" (UniqueName: \"kubernetes.io/projected/4881941b-eb71-45be-aa51-0e8431b29e89-kube-api-access-bwnzw\") pod \"openstackclient\" (UID: \"4881941b-eb71-45be-aa51-0e8431b29e89\") " pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.526876 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.573247 4705 generic.go:334] "Generic (PLEG): container finished" podID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerID="40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6" exitCode=143 Feb 16 15:15:08 crc kubenswrapper[4705]: I0216 15:15:08.574621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" event={"ID":"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6","Type":"ContainerDied","Data":"40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6"} Feb 16 15:15:09 crc kubenswrapper[4705]: I0216 15:15:09.170971 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 15:15:09 crc kubenswrapper[4705]: I0216 15:15:09.586343 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4881941b-eb71-45be-aa51-0e8431b29e89","Type":"ContainerStarted","Data":"57de06ba06664890884a054c4865cc4af2645844c7ee8d8f5de3a66e901861ed"} Feb 16 15:15:09 crc kubenswrapper[4705]: I0216 15:15:09.951006 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 15:15:10 crc kubenswrapper[4705]: I0216 15:15:10.917200 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.199:9311/healthcheck\": read tcp 10.217.0.2:40492->10.217.0.199:9311: read: connection reset by peer" Feb 16 15:15:10 crc kubenswrapper[4705]: I0216 15:15:10.917310 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.199:9311/healthcheck\": read tcp 
10.217.0.2:40488->10.217.0.199:9311: read: connection reset by peer" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.507310 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.514389 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-logs\") pod \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.514452 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbdxn\" (UniqueName: \"kubernetes.io/projected/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-kube-api-access-gbdxn\") pod \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.514482 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data\") pod \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.516031 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-logs" (OuterVolumeSpecName: "logs") pod "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" (UID: "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.524208 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-kube-api-access-gbdxn" (OuterVolumeSpecName: "kube-api-access-gbdxn") pod "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" (UID: "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6"). InnerVolumeSpecName "kube-api-access-gbdxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.616638 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-combined-ca-bundle\") pod \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.616721 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data-custom\") pod \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\" (UID: \"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6\") " Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.617132 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.617153 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbdxn\" (UniqueName: \"kubernetes.io/projected/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-kube-api-access-gbdxn\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.629441 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data-custom" 
(OuterVolumeSpecName: "config-data-custom") pod "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" (UID: "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.630154 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data" (OuterVolumeSpecName: "config-data") pod "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" (UID: "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.642330 4705 generic.go:334] "Generic (PLEG): container finished" podID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerID="b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4" exitCode=0 Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.642418 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" event={"ID":"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6","Type":"ContainerDied","Data":"b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4"} Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.642461 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" event={"ID":"7d30a0c8-a0b3-4655-ab70-4b01c0c732f6","Type":"ContainerDied","Data":"494073a82ffb15d51ca9ccf70ddd818083ecfa9ff2e728289031a38cb377d7c0"} Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.642488 4705 scope.go:117] "RemoveContainer" containerID="b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.642708 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56f6fcbd5d-ql4gk" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.658231 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" (UID: "7d30a0c8-a0b3-4655-ab70-4b01c0c732f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.685047 4705 scope.go:117] "RemoveContainer" containerID="40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.709023 4705 scope.go:117] "RemoveContainer" containerID="b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4" Feb 16 15:15:11 crc kubenswrapper[4705]: E0216 15:15:11.710173 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4\": container with ID starting with b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4 not found: ID does not exist" containerID="b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.710222 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4"} err="failed to get container status \"b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4\": rpc error: code = NotFound desc = could not find container \"b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4\": container with ID starting with b918e4068b5d224dc0ac20dd6a838f46f5289bd4564fbce0b38f65d250d963e4 not found: ID does not exist" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.710255 
4705 scope.go:117] "RemoveContainer" containerID="40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6" Feb 16 15:15:11 crc kubenswrapper[4705]: E0216 15:15:11.711066 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6\": container with ID starting with 40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6 not found: ID does not exist" containerID="40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.711084 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6"} err="failed to get container status \"40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6\": rpc error: code = NotFound desc = could not find container \"40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6\": container with ID starting with 40c56139e493dd1bcf404148d9d97700b7f0ccec91c3312fe6127bcd4ef2f3e6 not found: ID does not exist" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.721264 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.721329 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:11 crc kubenswrapper[4705]: I0216 15:15:11.721340 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:12 crc 
kubenswrapper[4705]: I0216 15:15:12.058066 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56f6fcbd5d-ql4gk"] Feb 16 15:15:12 crc kubenswrapper[4705]: I0216 15:15:12.084197 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-56f6fcbd5d-ql4gk"] Feb 16 15:15:12 crc kubenswrapper[4705]: I0216 15:15:12.445647 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" path="/var/lib/kubelet/pods/7d30a0c8-a0b3-4655-ab70-4b01c0c732f6/volumes" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.799111 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-85b76884b7-g4c57"] Feb 16 15:15:13 crc kubenswrapper[4705]: E0216 15:15:13.799969 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.799987 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api" Feb 16 15:15:13 crc kubenswrapper[4705]: E0216 15:15:13.800008 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api-log" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.800015 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api-log" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.800287 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api-log" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.801514 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d30a0c8-a0b3-4655-ab70-4b01c0c732f6" containerName="barbican-api" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.803520 4705 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.805592 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.811240 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.811246 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.819538 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85b76884b7-g4c57"] Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891472 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-combined-ca-bundle\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891563 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-config-data\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891586 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-internal-tls-certs\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 
15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891625 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-public-tls-certs\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891689 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-log-httpd\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891732 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns2bg\" (UniqueName: \"kubernetes.io/projected/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-kube-api-access-ns2bg\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891778 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-etc-swift\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.891810 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-run-httpd\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " 
pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.993850 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-config-data\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.993902 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-internal-tls-certs\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.993929 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-public-tls-certs\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.993999 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-log-httpd\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.994128 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns2bg\" (UniqueName: \"kubernetes.io/projected/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-kube-api-access-ns2bg\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 
15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.994201 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-etc-swift\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.994239 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-run-httpd\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.994353 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-combined-ca-bundle\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.995906 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-log-httpd\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:13 crc kubenswrapper[4705]: I0216 15:15:13.997097 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-run-httpd\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.003168 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-combined-ca-bundle\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.007009 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-etc-swift\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.010067 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-public-tls-certs\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.010708 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-config-data\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.020423 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns2bg\" (UniqueName: \"kubernetes.io/projected/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-kube-api-access-ns2bg\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.027863 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/811fab8b-dbb5-4985-b67f-d3671ea6ff9b-internal-tls-certs\") pod \"swift-proxy-85b76884b7-g4c57\" (UID: \"811fab8b-dbb5-4985-b67f-d3671ea6ff9b\") " pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.167545 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:14 crc kubenswrapper[4705]: I0216 15:15:14.815518 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85b76884b7-g4c57"] Feb 16 15:15:15 crc kubenswrapper[4705]: I0216 15:15:15.253711 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 15:15:15 crc kubenswrapper[4705]: I0216 15:15:15.755742 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b76884b7-g4c57" event={"ID":"811fab8b-dbb5-4985-b67f-d3671ea6ff9b","Type":"ContainerStarted","Data":"b2e44d42e6591bd938539ffa069132e365bc1444be32785ab2c8624355e7c642"} Feb 16 15:15:15 crc kubenswrapper[4705]: I0216 15:15:15.756232 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b76884b7-g4c57" event={"ID":"811fab8b-dbb5-4985-b67f-d3671ea6ff9b","Type":"ContainerStarted","Data":"e268c748fcdb24ba71ce2f7ff09d912ad538b7890dc0d84a77ac934ded34dee4"} Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.734639 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-mqnvt"] Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.736936 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-mqnvt" Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.760110 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mqnvt"] Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.855479 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-x6wr8"] Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.857507 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x6wr8" Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.877355 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2d9b-account-create-update-wlxl6"] Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.879331 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.889193 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.891578 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r85gm\" (UniqueName: \"kubernetes.io/projected/7b2a0a9c-1379-457e-a5e2-537304cfdcff-kube-api-access-r85gm\") pod \"nova-api-db-create-mqnvt\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " pod="openstack/nova-api-db-create-mqnvt" Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.891852 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b2a0a9c-1379-457e-a5e2-537304cfdcff-operator-scripts\") pod \"nova-api-db-create-mqnvt\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " pod="openstack/nova-api-db-create-mqnvt" Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.892102 4705 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-x6wr8"] Feb 16 15:15:16 crc kubenswrapper[4705]: I0216 15:15:16.959228 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2d9b-account-create-update-wlxl6"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.002108 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b468686-b5ab-423d-a720-a2c77aed457f-operator-scripts\") pod \"nova-cell0-db-create-x6wr8\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") " pod="openstack/nova-cell0-db-create-x6wr8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.002331 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18d067a-2ef1-4b11-936f-aef7f7910a80-operator-scripts\") pod \"nova-api-2d9b-account-create-update-wlxl6\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " pod="openstack/nova-api-2d9b-account-create-update-wlxl6" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.002454 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b2a0a9c-1379-457e-a5e2-537304cfdcff-operator-scripts\") pod \"nova-api-db-create-mqnvt\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " pod="openstack/nova-api-db-create-mqnvt" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.007868 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5d9n\" (UniqueName: \"kubernetes.io/projected/8b468686-b5ab-423d-a720-a2c77aed457f-kube-api-access-b5d9n\") pod \"nova-cell0-db-create-x6wr8\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") " pod="openstack/nova-cell0-db-create-x6wr8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.007997 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b2a0a9c-1379-457e-a5e2-537304cfdcff-operator-scripts\") pod \"nova-api-db-create-mqnvt\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " pod="openstack/nova-api-db-create-mqnvt" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.008650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzb7q\" (UniqueName: \"kubernetes.io/projected/c18d067a-2ef1-4b11-936f-aef7f7910a80-kube-api-access-qzb7q\") pod \"nova-api-2d9b-account-create-update-wlxl6\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " pod="openstack/nova-api-2d9b-account-create-update-wlxl6" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.008930 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r85gm\" (UniqueName: \"kubernetes.io/projected/7b2a0a9c-1379-457e-a5e2-537304cfdcff-kube-api-access-r85gm\") pod \"nova-api-db-create-mqnvt\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " pod="openstack/nova-api-db-create-mqnvt" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.042885 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r85gm\" (UniqueName: \"kubernetes.io/projected/7b2a0a9c-1379-457e-a5e2-537304cfdcff-kube-api-access-r85gm\") pod \"nova-api-db-create-mqnvt\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " pod="openstack/nova-api-db-create-mqnvt" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.088713 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mqnvt" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.099479 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6nsdt"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.101586 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6nsdt" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.115767 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5d9n\" (UniqueName: \"kubernetes.io/projected/8b468686-b5ab-423d-a720-a2c77aed457f-kube-api-access-b5d9n\") pod \"nova-cell0-db-create-x6wr8\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") " pod="openstack/nova-cell0-db-create-x6wr8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.115924 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzb7q\" (UniqueName: \"kubernetes.io/projected/c18d067a-2ef1-4b11-936f-aef7f7910a80-kube-api-access-qzb7q\") pod \"nova-api-2d9b-account-create-update-wlxl6\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " pod="openstack/nova-api-2d9b-account-create-update-wlxl6" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.116286 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b468686-b5ab-423d-a720-a2c77aed457f-operator-scripts\") pod \"nova-cell0-db-create-x6wr8\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") " pod="openstack/nova-cell0-db-create-x6wr8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.116526 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18d067a-2ef1-4b11-936f-aef7f7910a80-operator-scripts\") pod \"nova-api-2d9b-account-create-update-wlxl6\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " pod="openstack/nova-api-2d9b-account-create-update-wlxl6" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.118724 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b468686-b5ab-423d-a720-a2c77aed457f-operator-scripts\") pod \"nova-cell0-db-create-x6wr8\" (UID: 
\"8b468686-b5ab-423d-a720-a2c77aed457f\") " pod="openstack/nova-cell0-db-create-x6wr8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.119534 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18d067a-2ef1-4b11-936f-aef7f7910a80-operator-scripts\") pod \"nova-api-2d9b-account-create-update-wlxl6\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " pod="openstack/nova-api-2d9b-account-create-update-wlxl6" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.131820 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6nsdt"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.140388 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5d9n\" (UniqueName: \"kubernetes.io/projected/8b468686-b5ab-423d-a720-a2c77aed457f-kube-api-access-b5d9n\") pod \"nova-cell0-db-create-x6wr8\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") " pod="openstack/nova-cell0-db-create-x6wr8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.147397 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzb7q\" (UniqueName: \"kubernetes.io/projected/c18d067a-2ef1-4b11-936f-aef7f7910a80-kube-api-access-qzb7q\") pod \"nova-api-2d9b-account-create-update-wlxl6\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " pod="openstack/nova-api-2d9b-account-create-update-wlxl6" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.174494 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-de3f-account-create-update-d2gp8"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.176606 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.187942 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-de3f-account-create-update-d2gp8"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.191924 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x6wr8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.198780 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.227214 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.228571 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c6fc941-1576-4817-859a-6644349bc8cd-operator-scripts\") pod \"nova-cell1-db-create-6nsdt\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " pod="openstack/nova-cell1-db-create-6nsdt" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.229772 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38af35f6-7590-41c4-9442-ec89fe02106f-operator-scripts\") pod \"nova-cell0-de3f-account-create-update-d2gp8\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.239144 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84rlv\" (UniqueName: \"kubernetes.io/projected/38af35f6-7590-41c4-9442-ec89fe02106f-kube-api-access-84rlv\") pod \"nova-cell0-de3f-account-create-update-d2gp8\" 
(UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.239415 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5swf\" (UniqueName: \"kubernetes.io/projected/3c6fc941-1576-4817-859a-6644349bc8cd-kube-api-access-k5swf\") pod \"nova-cell1-db-create-6nsdt\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " pod="openstack/nova-cell1-db-create-6nsdt" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.324922 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ba40-account-create-update-8d7bg"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.326904 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.331933 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.349064 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c6fc941-1576-4817-859a-6644349bc8cd-operator-scripts\") pod \"nova-cell1-db-create-6nsdt\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " pod="openstack/nova-cell1-db-create-6nsdt" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.349963 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38af35f6-7590-41c4-9442-ec89fe02106f-operator-scripts\") pod \"nova-cell0-de3f-account-create-update-d2gp8\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.350501 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-84rlv\" (UniqueName: \"kubernetes.io/projected/38af35f6-7590-41c4-9442-ec89fe02106f-kube-api-access-84rlv\") pod \"nova-cell0-de3f-account-create-update-d2gp8\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.350626 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5swf\" (UniqueName: \"kubernetes.io/projected/3c6fc941-1576-4817-859a-6644349bc8cd-kube-api-access-k5swf\") pod \"nova-cell1-db-create-6nsdt\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " pod="openstack/nova-cell1-db-create-6nsdt" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.352122 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ba40-account-create-update-8d7bg"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.352540 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c6fc941-1576-4817-859a-6644349bc8cd-operator-scripts\") pod \"nova-cell1-db-create-6nsdt\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " pod="openstack/nova-cell1-db-create-6nsdt" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.353223 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38af35f6-7590-41c4-9442-ec89fe02106f-operator-scripts\") pod \"nova-cell0-de3f-account-create-update-d2gp8\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.373648 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7b7cc9557b-77tq2"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.379220 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.394987 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-7v2x2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.397016 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.397777 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.404428 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b7cc9557b-77tq2"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.442213 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84rlv\" (UniqueName: \"kubernetes.io/projected/38af35f6-7590-41c4-9442-ec89fe02106f-kube-api-access-84rlv\") pod \"nova-cell0-de3f-account-create-update-d2gp8\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.443624 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5swf\" (UniqueName: \"kubernetes.io/projected/3c6fc941-1576-4817-859a-6644349bc8cd-kube-api-access-k5swf\") pod \"nova-cell1-db-create-6nsdt\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " pod="openstack/nova-cell1-db-create-6nsdt" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.454097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-combined-ca-bundle\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc 
kubenswrapper[4705]: I0216 15:15:17.454160 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.454261 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data-custom\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.454318 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-operator-scripts\") pod \"nova-cell1-ba40-account-create-update-8d7bg\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.454361 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znx6v\" (UniqueName: \"kubernetes.io/projected/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-kube-api-access-znx6v\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.454413 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72tq9\" (UniqueName: \"kubernetes.io/projected/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-kube-api-access-72tq9\") pod \"nova-cell1-ba40-account-create-update-8d7bg\" 
(UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.558876 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-operator-scripts\") pod \"nova-cell1-ba40-account-create-update-8d7bg\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.559478 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znx6v\" (UniqueName: \"kubernetes.io/projected/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-kube-api-access-znx6v\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.559646 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72tq9\" (UniqueName: \"kubernetes.io/projected/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-kube-api-access-72tq9\") pod \"nova-cell1-ba40-account-create-update-8d7bg\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.559983 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-combined-ca-bundle\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.560584 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.560983 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data-custom\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.561675 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-operator-scripts\") pod \"nova-cell1-ba40-account-create-update-8d7bg\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.573689 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.574973 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.578880 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data-custom\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.582144 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-combined-ca-bundle\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.592169 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zg26f"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.600171 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.612699 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znx6v\" (UniqueName: \"kubernetes.io/projected/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-kube-api-access-znx6v\") pod \"heat-engine-7b7cc9557b-77tq2\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.616357 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72tq9\" (UniqueName: \"kubernetes.io/projected/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-kube-api-access-72tq9\") pod \"nova-cell1-ba40-account-create-update-8d7bg\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.627574 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zg26f"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.647676 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-57d4846c7f-r8fqk"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.649667 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.666788 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkr56\" (UniqueName: \"kubernetes.io/projected/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-kube-api-access-lkr56\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.667063 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.667125 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.667175 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-config\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.667251 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-svc\") pod 
\"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.667275 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.670108 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.690549 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-656d9cf494-c6m8t"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.693805 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.699358 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.700530 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.707453 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57d4846c7f-r8fqk"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.745998 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6nsdt" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.769031 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-656d9cf494-c6m8t"] Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777127 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777312 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-combined-ca-bundle\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777338 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777412 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777465 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-config\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777596 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777652 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8dr8\" (UniqueName: \"kubernetes.io/projected/3a49bd2f-26b0-4969-86db-cd980251a202-kube-api-access-m8dr8\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777684 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-combined-ca-bundle\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777748 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777782 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tvdc\" 
(UniqueName: \"kubernetes.io/projected/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-kube-api-access-9tvdc\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.777821 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.778129 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.779091 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data-custom\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.779875 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.780281 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkr56\" (UniqueName: 
\"kubernetes.io/projected/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-kube-api-access-lkr56\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.780467 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data-custom\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.780987 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-config\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.781319 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.781327 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.805432 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkr56\" (UniqueName: 
\"kubernetes.io/projected/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-kube-api-access-lkr56\") pod \"dnsmasq-dns-7756b9d78c-zg26f\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.811685 4705 generic.go:334] "Generic (PLEG): container finished" podID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerID="ec3ce9e162fe84497d1167a941a28f56f05bc9a6de835bb6906950d33e1b24de" exitCode=137 Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.811730 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerDied","Data":"ec3ce9e162fe84497d1167a941a28f56f05bc9a6de835bb6906950d33e1b24de"} Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.858224 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883028 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data-custom\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883155 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-combined-ca-bundle\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883184 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data\") pod 
\"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883700 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883735 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8dr8\" (UniqueName: \"kubernetes.io/projected/3a49bd2f-26b0-4969-86db-cd980251a202-kube-api-access-m8dr8\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883755 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-combined-ca-bundle\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883788 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tvdc\" (UniqueName: \"kubernetes.io/projected/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-kube-api-access-9tvdc\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.883847 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data-custom\") pod \"heat-api-656d9cf494-c6m8t\" (UID: 
\"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.887483 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-combined-ca-bundle\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.888498 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data-custom\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.891702 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data-custom\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.893056 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-combined-ca-bundle\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.894968 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 
15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.895649 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.902207 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8dr8\" (UniqueName: \"kubernetes.io/projected/3a49bd2f-26b0-4969-86db-cd980251a202-kube-api-access-m8dr8\") pod \"heat-api-656d9cf494-c6m8t\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.904128 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tvdc\" (UniqueName: \"kubernetes.io/projected/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-kube-api-access-9tvdc\") pod \"heat-cfnapi-57d4846c7f-r8fqk\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") " pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.930717 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:17 crc kubenswrapper[4705]: I0216 15:15:17.989854 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:18 crc kubenswrapper[4705]: I0216 15:15:18.079896 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:22 crc kubenswrapper[4705]: I0216 15:15:22.232167 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.185:3000/\": dial tcp 10.217.0.185:3000: connect: connection refused" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.470468 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7b7bf99b56-hm6dc"] Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.472950 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.510813 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-74b44f99fd-mnr7j"] Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.512608 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.547421 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b7bf99b56-hm6dc"] Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.594396 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-config-data-custom\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.594527 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9w9l\" (UniqueName: \"kubernetes.io/projected/ada71f46-f923-4974-9776-ed92f20c79b1-kube-api-access-r9w9l\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.594600 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-combined-ca-bundle\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.594669 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-config-data\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.625348 4705 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-cfnapi-7cfb944475-hpwlf"] Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.627411 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.661637 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-74b44f99fd-mnr7j"] Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.692415 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7cfb944475-hpwlf"] Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.702643 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data-custom\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.702735 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-combined-ca-bundle\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.702774 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-combined-ca-bundle\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.702809 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwfs2\" (UniqueName: 
\"kubernetes.io/projected/951d407e-26bd-442f-8519-61650a9a3e70-kube-api-access-xwfs2\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.702899 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-config-data\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.702985 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-config-data-custom\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.703045 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.703164 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9w9l\" (UniqueName: \"kubernetes.io/projected/ada71f46-f923-4974-9776-ed92f20c79b1-kube-api-access-r9w9l\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.721027 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-config-data\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.722132 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-config-data-custom\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.740266 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada71f46-f923-4974-9776-ed92f20c79b1-combined-ca-bundle\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.756955 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9w9l\" (UniqueName: \"kubernetes.io/projected/ada71f46-f923-4974-9776-ed92f20c79b1-kube-api-access-r9w9l\") pod \"heat-engine-7b7bf99b56-hm6dc\" (UID: \"ada71f46-f923-4974-9776-ed92f20c79b1\") " pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807455 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807548 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-combined-ca-bundle\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807573 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-265p4\" (UniqueName: \"kubernetes.io/projected/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-kube-api-access-265p4\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807628 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data-custom\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807654 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807700 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-combined-ca-bundle\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807723 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwfs2\" (UniqueName: 
\"kubernetes.io/projected/951d407e-26bd-442f-8519-61650a9a3e70-kube-api-access-xwfs2\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.807817 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data-custom\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.821247 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data-custom\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.823905 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.824793 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.830242 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-combined-ca-bundle\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.872149 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwfs2\" (UniqueName: \"kubernetes.io/projected/951d407e-26bd-442f-8519-61650a9a3e70-kube-api-access-xwfs2\") pod \"heat-api-74b44f99fd-mnr7j\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.910616 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.911158 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data-custom\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.911242 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-combined-ca-bundle\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " 
pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.911261 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-265p4\" (UniqueName: \"kubernetes.io/projected/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-kube-api-access-265p4\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.915065 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.939789 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-265p4\" (UniqueName: \"kubernetes.io/projected/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-kube-api-access-265p4\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.940551 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-combined-ca-bundle\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:24 crc kubenswrapper[4705]: I0216 15:15:24.942228 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data-custom\") pod \"heat-cfnapi-7cfb944475-hpwlf\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 
15:15:25 crc kubenswrapper[4705]: I0216 15:15:25.013346 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:25 crc kubenswrapper[4705]: I0216 15:15:25.165387 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:25 crc kubenswrapper[4705]: I0216 15:15:25.296939 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66f94f69bf-82g78" Feb 16 15:15:25 crc kubenswrapper[4705]: I0216 15:15:25.417684 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75d799457-fvqj6"] Feb 16 15:15:25 crc kubenswrapper[4705]: I0216 15:15:25.418049 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75d799457-fvqj6" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-api" containerID="cri-o://b70e5c0615812ff6aed42dcb8e09a0b01754fd31e289a59cfbe7b21ae9cc3afe" gracePeriod=30 Feb 16 15:15:25 crc kubenswrapper[4705]: I0216 15:15:25.418495 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75d799457-fvqj6" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-httpd" containerID="cri-o://338cf708ba8f10f855855c2179e37cb77b418143d440fdc6a5cda229e650ec37" gracePeriod=30 Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.010870 4705 generic.go:334] "Generic (PLEG): container finished" podID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerID="338cf708ba8f10f855855c2179e37cb77b418143d440fdc6a5cda229e650ec37" exitCode=0 Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.010974 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d799457-fvqj6" event={"ID":"f5639f9d-2d22-47cb-b481-10e88dc7f90f","Type":"ContainerDied","Data":"338cf708ba8f10f855855c2179e37cb77b418143d440fdc6a5cda229e650ec37"} Feb 16 15:15:26 crc 
kubenswrapper[4705]: I0216 15:15:26.017235 4705 generic.go:334] "Generic (PLEG): container finished" podID="69bc6a88-b325-43bd-af4c-55283723a765" containerID="baa2831e35077fa704a32b810c85079d3310969dea312c19a9de3b1a5f7540ac" exitCode=137 Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.017303 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69bc6a88-b325-43bd-af4c-55283723a765","Type":"ContainerDied","Data":"baa2831e35077fa704a32b810c85079d3310969dea312c19a9de3b1a5f7540ac"} Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.222533 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-656d9cf494-c6m8t"] Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.267538 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7986669c9b-q8ghv"] Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.269618 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.273222 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.273413 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.292630 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7986669c9b-q8ghv"] Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.319575 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57d4846c7f-r8fqk"] Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.357247 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p72hq\" (UniqueName: \"kubernetes.io/projected/08b1576e-92c8-407b-b821-e0cbfe1be11a-kube-api-access-p72hq\") pod 
\"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.357509 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-config-data-custom\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.357601 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-config-data\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.357771 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-internal-tls-certs\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.357803 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-public-tls-certs\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.357959 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-combined-ca-bundle\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.359045 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-65b6d6849b-79456"] Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.361166 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.364938 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.368558 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.382732 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65b6d6849b-79456"] Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460476 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-config-data-custom\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460555 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p72hq\" (UniqueName: \"kubernetes.io/projected/08b1576e-92c8-407b-b821-e0cbfe1be11a-kube-api-access-p72hq\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460594 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85dwt\" (UniqueName: \"kubernetes.io/projected/94fb430a-807d-4e37-bc5a-9b4c75454427-kube-api-access-85dwt\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460669 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-config-data-custom\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460713 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-config-data\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460750 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-public-tls-certs\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460789 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-combined-ca-bundle\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460819 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-internal-tls-certs\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460843 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-public-tls-certs\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460907 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-internal-tls-certs\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460937 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-combined-ca-bundle\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.460978 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-config-data\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.466049 4705 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.466161 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.471270 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-combined-ca-bundle\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.475536 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-config-data-custom\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.477307 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-internal-tls-certs\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.477646 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-public-tls-certs\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.481602 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p72hq\" (UniqueName: 
\"kubernetes.io/projected/08b1576e-92c8-407b-b821-e0cbfe1be11a-kube-api-access-p72hq\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.496733 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b1576e-92c8-407b-b821-e0cbfe1be11a-config-data\") pod \"heat-api-7986669c9b-q8ghv\" (UID: \"08b1576e-92c8-407b-b821-e0cbfe1be11a\") " pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.563806 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-public-tls-certs\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.563885 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-combined-ca-bundle\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.563995 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-internal-tls-certs\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.564074 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-config-data\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.564142 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-config-data-custom\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.564193 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85dwt\" (UniqueName: \"kubernetes.io/projected/94fb430a-807d-4e37-bc5a-9b4c75454427-kube-api-access-85dwt\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.568071 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.568542 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.576788 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-config-data\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.578077 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-config-data-custom\") pod 
\"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.585011 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-internal-tls-certs\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.585113 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-public-tls-certs\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.586405 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb430a-807d-4e37-bc5a-9b4c75454427-combined-ca-bundle\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.587758 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85dwt\" (UniqueName: \"kubernetes.io/projected/94fb430a-807d-4e37-bc5a-9b4c75454427-kube-api-access-85dwt\") pod \"heat-cfnapi-65b6d6849b-79456\" (UID: \"94fb430a-807d-4e37-bc5a-9b4c75454427\") " pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.644267 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:26 crc kubenswrapper[4705]: I0216 15:15:26.694389 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:26 crc kubenswrapper[4705]: E0216 15:15:26.696752 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 16 15:15:26 crc kubenswrapper[4705]: E0216 15:15:26.696949 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n97h86h677h666h84h66fh648h59ch64fh7ch56dh5d7h5d5h699h75h5bfh644h6bh64dh564h5b6h55ch64h7dh676h66bh5f4h549h9fh5d4h5d9h596q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveRea
dOnly:nil,},VolumeMount{Name:kube-api-access-bwnzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(4881941b-eb71-45be-aa51-0e8431b29e89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:15:26 crc kubenswrapper[4705]: E0216 15:15:26.698166 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="4881941b-eb71-45be-aa51-0e8431b29e89" Feb 16 15:15:27 crc kubenswrapper[4705]: E0216 15:15:27.170325 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="4881941b-eb71-45be-aa51-0e8431b29e89" Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.476570 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.624511 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-log-httpd\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.624654 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-combined-ca-bundle\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.624748 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-config-data\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.624801 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4tkk\" (UniqueName: \"kubernetes.io/projected/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-kube-api-access-g4tkk\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.624933 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-scripts\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.625069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-sg-core-conf-yaml\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.625137 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-run-httpd\") pod \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\" (UID: \"b1b8bc91-daf7-4fa0-aad2-7d14527c2298\") " Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.626650 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.634593 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.654772 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-kube-api-access-g4tkk" (OuterVolumeSpecName: "kube-api-access-g4tkk") pod "b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "kube-api-access-g4tkk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.655703 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-scripts" (OuterVolumeSpecName: "scripts") pod "b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.766858 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.767226 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.767237 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.767246 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4tkk\" (UniqueName: \"kubernetes.io/projected/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-kube-api-access-g4tkk\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:27 crc kubenswrapper[4705]: I0216 15:15:27.994578 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-de3f-account-create-update-d2gp8"] Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.000258 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod 
"b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.051637 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.073795 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-config-data" (OuterVolumeSpecName: "config-data") pod "b1b8bc91-daf7-4fa0-aad2-7d14527c2298" (UID: "b1b8bc91-daf7-4fa0-aad2-7d14527c2298"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.078138 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.078164 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.078175 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b8bc91-daf7-4fa0-aad2-7d14527c2298-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.153599 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"69bc6a88-b325-43bd-af4c-55283723a765","Type":"ContainerDied","Data":"50266016e66cb9551ee585a38ed927f03edf352443c8c419df65fdc099965ff7"} Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.153645 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50266016e66cb9551ee585a38ed927f03edf352443c8c419df65fdc099965ff7" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.184660 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b1b8bc91-daf7-4fa0-aad2-7d14527c2298","Type":"ContainerDied","Data":"1f91f91f4ee1690f46dee7379d3b5f6f9664f4c57d16ad81e7ef1f99a61e9417"} Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.184743 4705 scope.go:117] "RemoveContainer" containerID="ec3ce9e162fe84497d1167a941a28f56f05bc9a6de835bb6906950d33e1b24de" Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.185019 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.242275 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.460299 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.491608 4705 scope.go:117] "RemoveContainer" containerID="7cdc82c1f54346fbd4bdea38f1d1311837c08094d5d76a0e3ecc3bb36394f874"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.530896 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.554010 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.568103 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:15:28 crc kubenswrapper[4705]: E0216 15:15:28.568671 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="ceilometer-notification-agent"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.568692 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="ceilometer-notification-agent"
Feb 16 15:15:28 crc kubenswrapper[4705]: E0216 15:15:28.568706 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api-log"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.568713 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api-log"
Feb 16 15:15:28 crc kubenswrapper[4705]: E0216 15:15:28.568729 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="proxy-httpd"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.568735 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="proxy-httpd"
Feb 16 15:15:28 crc kubenswrapper[4705]: E0216 15:15:28.568744 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.568750 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api"
Feb 16 15:15:28 crc kubenswrapper[4705]: E0216 15:15:28.568763 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="sg-core"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.568769 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="sg-core"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.569073 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api-log"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.569089 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="sg-core"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.569106 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="ceilometer-notification-agent"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.569120 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="69bc6a88-b325-43bd-af4c-55283723a765" containerName="cinder-api"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.569135 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" containerName="proxy-httpd"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.571475 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.576152 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.576541 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.600570 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.600627 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-scripts\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.600676 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc6a88-b325-43bd-af4c-55283723a765-logs\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.600854 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69bc6a88-b325-43bd-af4c-55283723a765-etc-machine-id\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.600932 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-combined-ca-bundle\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.601034 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2dbw\" (UniqueName: \"kubernetes.io/projected/69bc6a88-b325-43bd-af4c-55283723a765-kube-api-access-s2dbw\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.601061 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data-custom\") pod \"69bc6a88-b325-43bd-af4c-55283723a765\" (UID: \"69bc6a88-b325-43bd-af4c-55283723a765\") "
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.601103 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69bc6a88-b325-43bd-af4c-55283723a765-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.601684 4705 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/69bc6a88-b325-43bd-af4c-55283723a765-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.605022 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69bc6a88-b325-43bd-af4c-55283723a765-logs" (OuterVolumeSpecName: "logs") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.608165 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.640114 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69bc6a88-b325-43bd-af4c-55283723a765-kube-api-access-s2dbw" (OuterVolumeSpecName: "kube-api-access-s2dbw") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "kube-api-access-s2dbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.642672 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.642716 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-scripts" (OuterVolumeSpecName: "scripts") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.696204 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.708744 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-log-httpd\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.708803 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.708839 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-config-data\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.708908 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709002 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-run-httpd\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709035 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbr2v\" (UniqueName: \"kubernetes.io/projected/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-kube-api-access-dbr2v\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709073 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-scripts\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709182 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709196 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2dbw\" (UniqueName: \"kubernetes.io/projected/69bc6a88-b325-43bd-af4c-55283723a765-kube-api-access-s2dbw\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709206 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709215 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.709222 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69bc6a88-b325-43bd-af4c-55283723a765-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.735609 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data" (OuterVolumeSpecName: "config-data") pod "69bc6a88-b325-43bd-af4c-55283723a765" (UID: "69bc6a88-b325-43bd-af4c-55283723a765"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.813068 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-log-httpd\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.813573 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.813705 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-config-data\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.813880 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.814053 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-log-httpd\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.814225 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-run-httpd\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.814353 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbr2v\" (UniqueName: \"kubernetes.io/projected/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-kube-api-access-dbr2v\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.814538 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-scripts\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.815084 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-run-httpd\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.815171 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69bc6a88-b325-43bd-af4c-55283723a765-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.826450 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.833127 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-scripts\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.857974 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.859141 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-config-data\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:28 crc kubenswrapper[4705]: I0216 15:15:28.865720 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbr2v\" (UniqueName: \"kubernetes.io/projected/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-kube-api-access-dbr2v\") pod \"ceilometer-0\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " pod="openstack/ceilometer-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.000327 4705 scope.go:117] "RemoveContainer" containerID="9a7cdbca15bcb88834b38bafb18effcd247f1df4a482e11737dd84f2fd64e363"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.038872 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.223415 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" event={"ID":"38af35f6-7590-41c4-9442-ec89fe02106f","Type":"ContainerStarted","Data":"624e47298bbfcaa05f1d1cb521cf8da9b7629abb98c32b57ca82484813d5a2ce"}
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.223877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" event={"ID":"38af35f6-7590-41c4-9442-ec89fe02106f","Type":"ContainerStarted","Data":"0e7a2938061cf0203a20d61b57343936ee25d4ec52176b148cecec41e59f82c7"}
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.245340 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" podStartSLOduration=12.245318184 podStartE2EDuration="12.245318184s" podCreationTimestamp="2026-02-16 15:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:29.239182481 +0000 UTC m=+1323.424159547" watchObservedRunningTime="2026-02-16 15:15:29.245318184 +0000 UTC m=+1323.430295250"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.250607 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.255465 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b76884b7-g4c57" event={"ID":"811fab8b-dbb5-4985-b67f-d3671ea6ff9b","Type":"ContainerStarted","Data":"2de6afc52b4fb109681f7676f68a992bbdf998d962c01b0b50469249fc69a1c3"}
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.256193 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.256416 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85b76884b7-g4c57"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.282496 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-85b76884b7-g4c57" podUID="811fab8b-dbb5-4985-b67f-d3671ea6ff9b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.307599 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-85b76884b7-g4c57" podStartSLOduration=16.307577685 podStartE2EDuration="16.307577685s" podCreationTimestamp="2026-02-16 15:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:29.287550512 +0000 UTC m=+1323.472527598" watchObservedRunningTime="2026-02-16 15:15:29.307577685 +0000 UTC m=+1323.492554761"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.365079 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.388925 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.413456 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.436857 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.445303 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.445605 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.484191 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.578044 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.585773 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d09b351a-8da4-4f00-8847-f3461478179f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.593457 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wpnm\" (UniqueName: \"kubernetes.io/projected/d09b351a-8da4-4f00-8847-f3461478179f-kube-api-access-2wpnm\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.593554 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.593649 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-config-data-custom\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.593843 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.593940 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-scripts\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.593974 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.594002 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-config-data\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.594213 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d09b351a-8da4-4f00-8847-f3461478179f-logs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698005 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d09b351a-8da4-4f00-8847-f3461478179f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698451 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wpnm\" (UniqueName: \"kubernetes.io/projected/d09b351a-8da4-4f00-8847-f3461478179f-kube-api-access-2wpnm\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698485 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698518 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-config-data-custom\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698571 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698607 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-scripts\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698630 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698648 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-config-data\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.698719 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d09b351a-8da4-4f00-8847-f3461478179f-logs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.699239 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d09b351a-8da4-4f00-8847-f3461478179f-logs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.701455 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d09b351a-8da4-4f00-8847-f3461478179f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.710288 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.715303 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-config-data\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.727753 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-config-data-custom\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.728414 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-scripts\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.731681 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wpnm\" (UniqueName: \"kubernetes.io/projected/d09b351a-8da4-4f00-8847-f3461478179f-kube-api-access-2wpnm\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.747210 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.771283 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d09b351a-8da4-4f00-8847-f3461478179f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d09b351a-8da4-4f00-8847-f3461478179f\") " pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.817261 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.901162 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ba40-account-create-update-8d7bg"]
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.981681 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6nsdt"]
Feb 16 15:15:29 crc kubenswrapper[4705]: I0216 15:15:29.983060 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.009328 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.035285 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57d4846c7f-r8fqk"]
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.061099 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2d9b-account-create-update-wlxl6"]
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.273067 4705 generic.go:334] "Generic (PLEG): container finished" podID="38af35f6-7590-41c4-9442-ec89fe02106f" containerID="624e47298bbfcaa05f1d1cb521cf8da9b7629abb98c32b57ca82484813d5a2ce" exitCode=0
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.273205 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" event={"ID":"38af35f6-7590-41c4-9442-ec89fe02106f","Type":"ContainerDied","Data":"624e47298bbfcaa05f1d1cb521cf8da9b7629abb98c32b57ca82484813d5a2ce"}
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.275153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6nsdt" event={"ID":"3c6fc941-1576-4817-859a-6644349bc8cd","Type":"ContainerStarted","Data":"0402e46f35c154212ec7419bd6c3fec74c389a550cdcd2bb465f45223c5e91dd"}
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.280159 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" event={"ID":"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca","Type":"ContainerStarted","Data":"07359fe7b9cf7c5f1d493c117441af14d97a55cf8e7e896736d82451018cdca8"}
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.283889 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" event={"ID":"6a0302cb-f7dd-46d4-8df0-2ab25bddec10","Type":"ContainerStarted","Data":"0643e28f6cc16efbe3ba6a7f835bd85812e2fcc0857d0dda9b56690a6a620d51"}
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.288760 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" event={"ID":"c18d067a-2ef1-4b11-936f-aef7f7910a80","Type":"ContainerStarted","Data":"c83e041ccbe28cc109471010621433baa4da8a5725021b1b9a2d4ab402d027a1"}
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.296526 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-85b76884b7-g4c57" podUID="811fab8b-dbb5-4985-b67f-d3671ea6ff9b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.483518 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69bc6a88-b325-43bd-af4c-55283723a765" path="/var/lib/kubelet/pods/69bc6a88-b325-43bd-af4c-55283723a765/volumes"
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.485164 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b8bc91-daf7-4fa0-aad2-7d14527c2298" path="/var/lib/kubelet/pods/b1b8bc91-daf7-4fa0-aad2-7d14527c2298/volumes"
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.733154 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zg26f"]
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.788718 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-74b44f99fd-mnr7j"]
Feb 16 15:15:30 crc kubenswrapper[4705]: W0216 15:15:30.806588 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod951d407e_26bd_442f_8519_61650a9a3e70.slice/crio-5dc1b5446ccf26eb084458e1080b22b0456b4c0fa87963f6cea8378d62e58a34 WatchSource:0}: Error finding container 5dc1b5446ccf26eb084458e1080b22b0456b4c0fa87963f6cea8378d62e58a34: Status 404 returned error can't find the container with id 5dc1b5446ccf26eb084458e1080b22b0456b4c0fa87963f6cea8378d62e58a34
Feb 16 15:15:30 crc kubenswrapper[4705]: I0216 15:15:30.919330 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b7bf99b56-hm6dc"]
Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.111234 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7cfb944475-hpwlf"]
Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.154153 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7986669c9b-q8ghv"]
Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.173174 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-656d9cf494-c6m8t"]
Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.189283 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-x6wr8"]
Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.219428 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b7cc9557b-77tq2"]
Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.222006 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mqnvt"]
Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.240385 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65b6d6849b-79456"]
Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.258191 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:15:31 crc kubenswrapper[4705]: W0216 15:15:31.287026 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6760289c_b8a9_45ed_bbab_3d5d5ca1db17.slice/crio-3d27a22eae577ba6a17893a80486afe6063753a252d79954100c810c383ebd54 WatchSource:0}: Error finding container 3d27a22eae577ba6a17893a80486afe6063753a252d79954100c810c383ebd54: Status 404 returned error can't find the container with id 3d27a22eae577ba6a17893a80486afe6063753a252d79954100c810c383ebd54
Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.380276 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.397694 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" event={"ID":"6a0302cb-f7dd-46d4-8df0-2ab25bddec10","Type":"ContainerStarted","Data":"b6ff178ee59d258cd0a815ddbd0d83ca22d1d8fd5e5badc95b33346ac9ac1dd2"}
Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.433799 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/heat-cfnapi-65b6d6849b-79456" event={"ID":"94fb430a-807d-4e37-bc5a-9b4c75454427","Type":"ContainerStarted","Data":"2cdb5dc66bc2ee90d7d5c23d3d6d7ca813c990a20ae4c3e03b2ace84b86330ed"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.438221 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" podStartSLOduration=14.438192233 podStartE2EDuration="14.438192233s" podCreationTimestamp="2026-02-16 15:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:31.421538105 +0000 UTC m=+1325.606515191" watchObservedRunningTime="2026-02-16 15:15:31.438192233 +0000 UTC m=+1325.623169309" Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.442843 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b7bf99b56-hm6dc" event={"ID":"ada71f46-f923-4974-9776-ed92f20c79b1","Type":"ContainerStarted","Data":"67ef71715a23c912ae4fd99be1d097d8e372f553b92cd63cf628172082ac6f24"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.445269 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" event={"ID":"6f14f59b-5faf-48e0-bbdc-7f97c3836a35","Type":"ContainerStarted","Data":"6acd1944658746507adf3b4af992bae06e651f8bf8b1f5ec60b84795bec2d1f1"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.455573 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mqnvt" event={"ID":"7b2a0a9c-1379-457e-a5e2-537304cfdcff","Type":"ContainerStarted","Data":"a5ece6223ece92829877ec6c63ae433c7500a1bc896b69d0f284e3fd6afc7cb7"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.457853 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x6wr8" 
event={"ID":"8b468686-b5ab-423d-a720-a2c77aed457f","Type":"ContainerStarted","Data":"446fae71056dbf1b7f079bba077645c2c99e95a68610a5722ab512a3cf936661"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.460418 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74b44f99fd-mnr7j" event={"ID":"951d407e-26bd-442f-8519-61650a9a3e70","Type":"ContainerStarted","Data":"5dc1b5446ccf26eb084458e1080b22b0456b4c0fa87963f6cea8378d62e58a34"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.475440 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" event={"ID":"c18d067a-2ef1-4b11-936f-aef7f7910a80","Type":"ContainerStarted","Data":"fa03ffbdc99df54493084bdd802dfc7cc972f18375229d2457f61f8fa6ea18b6"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.484193 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7986669c9b-q8ghv" event={"ID":"08b1576e-92c8-407b-b821-e0cbfe1be11a","Type":"ContainerStarted","Data":"e5055a703a591476917dfc6fbbf1aef43e5b8b8aba57c1130df721992b50defe"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.486263 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b7cc9557b-77tq2" event={"ID":"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa","Type":"ContainerStarted","Data":"00f8e5fe522e813566a78b6896b44d2c17e83898b0bbb39385052b0a457034e8"} Feb 16 15:15:31 crc kubenswrapper[4705]: W0216 15:15:31.487917 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd09b351a_8da4_4f00_8847_f3461478179f.slice/crio-7619464f942a7231be64ec173422c90b46158713732b1adce994e1174790ed2e WatchSource:0}: Error finding container 7619464f942a7231be64ec173422c90b46158713732b1adce994e1174790ed2e: Status 404 returned error can't find the container with id 7619464f942a7231be64ec173422c90b46158713732b1adce994e1174790ed2e Feb 16 15:15:31 crc 
kubenswrapper[4705]: I0216 15:15:31.488528 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-656d9cf494-c6m8t" event={"ID":"3a49bd2f-26b0-4969-86db-cd980251a202","Type":"ContainerStarted","Data":"75b8ea33afa2dc74710b8197cd60788f65dd6c58802ff69550dde775ef900e97"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.525950 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6nsdt" event={"ID":"3c6fc941-1576-4817-859a-6644349bc8cd","Type":"ContainerStarted","Data":"e8a382be23bea794eda4951ad147e8a541ec0cf46557fafa0b29ca1f74d84546"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.555051 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" event={"ID":"59b661f8-8d2f-45db-ab8d-cd6436cec8eb","Type":"ContainerStarted","Data":"7eed159df357b814d8fe77b30f4e632478a311f8b770660151ac4fae245b6428"} Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.687586 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:15:31 crc kubenswrapper[4705]: I0216 15:15:31.687658 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.597557 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mqnvt" event={"ID":"7b2a0a9c-1379-457e-a5e2-537304cfdcff","Type":"ContainerStarted","Data":"5298d8d4bbe490dcf8fd4d8c8fd18c95543c555b9240d37267fbfc9891ee3207"} Feb 16 15:15:32 crc 
kubenswrapper[4705]: I0216 15:15:32.609010 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x6wr8" event={"ID":"8b468686-b5ab-423d-a720-a2c77aed457f","Type":"ContainerStarted","Data":"8727f6608d01bea1d2d092cb593cbdfdbcf01d7388fded5a43fcf9ca1545112c"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.612238 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerStarted","Data":"3d27a22eae577ba6a17893a80486afe6063753a252d79954100c810c383ebd54"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.674416 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-mqnvt" podStartSLOduration=16.674387679 podStartE2EDuration="16.674387679s" podCreationTimestamp="2026-02-16 15:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:32.63780355 +0000 UTC m=+1326.822780626" watchObservedRunningTime="2026-02-16 15:15:32.674387679 +0000 UTC m=+1326.859364845" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.682554 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b7bf99b56-hm6dc" event={"ID":"ada71f46-f923-4974-9776-ed92f20c79b1","Type":"ContainerStarted","Data":"96d51637e1959c093b9c48d9015dbec840c3b58132f3e7055cb5c1b21ca999c1"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.685396 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7b7bf99b56-hm6dc" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.689697 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" event={"ID":"38af35f6-7590-41c4-9442-ec89fe02106f","Type":"ContainerDied","Data":"0e7a2938061cf0203a20d61b57343936ee25d4ec52176b148cecec41e59f82c7"} Feb 16 15:15:32 crc 
kubenswrapper[4705]: I0216 15:15:32.689740 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e7a2938061cf0203a20d61b57343936ee25d4ec52176b148cecec41e59f82c7" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.690620 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-x6wr8" podStartSLOduration=16.690595885 podStartE2EDuration="16.690595885s" podCreationTimestamp="2026-02-16 15:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:32.656967229 +0000 UTC m=+1326.841944305" watchObservedRunningTime="2026-02-16 15:15:32.690595885 +0000 UTC m=+1326.875572961" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.707752 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d09b351a-8da4-4f00-8847-f3461478179f","Type":"ContainerStarted","Data":"7619464f942a7231be64ec173422c90b46158713732b1adce994e1174790ed2e"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.736467 4705 generic.go:334] "Generic (PLEG): container finished" podID="3c6fc941-1576-4817-859a-6644349bc8cd" containerID="e8a382be23bea794eda4951ad147e8a541ec0cf46557fafa0b29ca1f74d84546" exitCode=0 Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.736579 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6nsdt" event={"ID":"3c6fc941-1576-4817-859a-6644349bc8cd","Type":"ContainerDied","Data":"e8a382be23bea794eda4951ad147e8a541ec0cf46557fafa0b29ca1f74d84546"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.762521 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7b7bf99b56-hm6dc" podStartSLOduration=8.762467896 podStartE2EDuration="8.762467896s" podCreationTimestamp="2026-02-16 15:15:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:32.719521428 +0000 UTC m=+1326.904498504" watchObservedRunningTime="2026-02-16 15:15:32.762467896 +0000 UTC m=+1326.947444972" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.773683 4705 generic.go:334] "Generic (PLEG): container finished" podID="6a0302cb-f7dd-46d4-8df0-2ab25bddec10" containerID="b6ff178ee59d258cd0a815ddbd0d83ca22d1d8fd5e5badc95b33346ac9ac1dd2" exitCode=0 Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.774082 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" event={"ID":"6a0302cb-f7dd-46d4-8df0-2ab25bddec10","Type":"ContainerDied","Data":"b6ff178ee59d258cd0a815ddbd0d83ca22d1d8fd5e5badc95b33346ac9ac1dd2"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.809857 4705 generic.go:334] "Generic (PLEG): container finished" podID="c18d067a-2ef1-4b11-936f-aef7f7910a80" containerID="fa03ffbdc99df54493084bdd802dfc7cc972f18375229d2457f61f8fa6ea18b6" exitCode=0 Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.809916 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" event={"ID":"c18d067a-2ef1-4b11-936f-aef7f7910a80","Type":"ContainerDied","Data":"fa03ffbdc99df54493084bdd802dfc7cc972f18375229d2457f61f8fa6ea18b6"} Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.825130 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.974584 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84rlv\" (UniqueName: \"kubernetes.io/projected/38af35f6-7590-41c4-9442-ec89fe02106f-kube-api-access-84rlv\") pod \"38af35f6-7590-41c4-9442-ec89fe02106f\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.975526 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38af35f6-7590-41c4-9442-ec89fe02106f-operator-scripts\") pod \"38af35f6-7590-41c4-9442-ec89fe02106f\" (UID: \"38af35f6-7590-41c4-9442-ec89fe02106f\") " Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.976445 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38af35f6-7590-41c4-9442-ec89fe02106f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38af35f6-7590-41c4-9442-ec89fe02106f" (UID: "38af35f6-7590-41c4-9442-ec89fe02106f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.983481 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38af35f6-7590-41c4-9442-ec89fe02106f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:32 crc kubenswrapper[4705]: I0216 15:15:32.998285 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38af35f6-7590-41c4-9442-ec89fe02106f-kube-api-access-84rlv" (OuterVolumeSpecName: "kube-api-access-84rlv") pod "38af35f6-7590-41c4-9442-ec89fe02106f" (UID: "38af35f6-7590-41c4-9442-ec89fe02106f"). InnerVolumeSpecName "kube-api-access-84rlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.086731 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84rlv\" (UniqueName: \"kubernetes.io/projected/38af35f6-7590-41c4-9442-ec89fe02106f-kube-api-access-84rlv\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.550512 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6nsdt" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.579002 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.615534 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5swf\" (UniqueName: \"kubernetes.io/projected/3c6fc941-1576-4817-859a-6644349bc8cd-kube-api-access-k5swf\") pod \"3c6fc941-1576-4817-859a-6644349bc8cd\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.615617 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzb7q\" (UniqueName: \"kubernetes.io/projected/c18d067a-2ef1-4b11-936f-aef7f7910a80-kube-api-access-qzb7q\") pod \"c18d067a-2ef1-4b11-936f-aef7f7910a80\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.615683 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c6fc941-1576-4817-859a-6644349bc8cd-operator-scripts\") pod \"3c6fc941-1576-4817-859a-6644349bc8cd\" (UID: \"3c6fc941-1576-4817-859a-6644349bc8cd\") " Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.615982 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/c18d067a-2ef1-4b11-936f-aef7f7910a80-operator-scripts\") pod \"c18d067a-2ef1-4b11-936f-aef7f7910a80\" (UID: \"c18d067a-2ef1-4b11-936f-aef7f7910a80\") " Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.618869 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c18d067a-2ef1-4b11-936f-aef7f7910a80-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c18d067a-2ef1-4b11-936f-aef7f7910a80" (UID: "c18d067a-2ef1-4b11-936f-aef7f7910a80"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.619192 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c6fc941-1576-4817-859a-6644349bc8cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c6fc941-1576-4817-859a-6644349bc8cd" (UID: "3c6fc941-1576-4817-859a-6644349bc8cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.621279 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18d067a-2ef1-4b11-936f-aef7f7910a80-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.621330 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c6fc941-1576-4817-859a-6644349bc8cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.625926 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6fc941-1576-4817-859a-6644349bc8cd-kube-api-access-k5swf" (OuterVolumeSpecName: "kube-api-access-k5swf") pod "3c6fc941-1576-4817-859a-6644349bc8cd" (UID: "3c6fc941-1576-4817-859a-6644349bc8cd"). 
InnerVolumeSpecName "kube-api-access-k5swf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.625987 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c18d067a-2ef1-4b11-936f-aef7f7910a80-kube-api-access-qzb7q" (OuterVolumeSpecName: "kube-api-access-qzb7q") pod "c18d067a-2ef1-4b11-936f-aef7f7910a80" (UID: "c18d067a-2ef1-4b11-936f-aef7f7910a80"). InnerVolumeSpecName "kube-api-access-qzb7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.725511 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5swf\" (UniqueName: \"kubernetes.io/projected/3c6fc941-1576-4817-859a-6644349bc8cd-kube-api-access-k5swf\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.726662 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzb7q\" (UniqueName: \"kubernetes.io/projected/c18d067a-2ef1-4b11-936f-aef7f7910a80-kube-api-access-qzb7q\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.882260 4705 generic.go:334] "Generic (PLEG): container finished" podID="7b2a0a9c-1379-457e-a5e2-537304cfdcff" containerID="5298d8d4bbe490dcf8fd4d8c8fd18c95543c555b9240d37267fbfc9891ee3207" exitCode=0 Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.882703 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mqnvt" event={"ID":"7b2a0a9c-1379-457e-a5e2-537304cfdcff","Type":"ContainerDied","Data":"5298d8d4bbe490dcf8fd4d8c8fd18c95543c555b9240d37267fbfc9891ee3207"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.903711 4705 generic.go:334] "Generic (PLEG): container finished" podID="8b468686-b5ab-423d-a720-a2c77aed457f" containerID="8727f6608d01bea1d2d092cb593cbdfdbcf01d7388fded5a43fcf9ca1545112c" exitCode=0 Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 
15:15:33.903789 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x6wr8" event={"ID":"8b468686-b5ab-423d-a720-a2c77aed457f","Type":"ContainerDied","Data":"8727f6608d01bea1d2d092cb593cbdfdbcf01d7388fded5a43fcf9ca1545112c"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.909529 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" event={"ID":"c18d067a-2ef1-4b11-936f-aef7f7910a80","Type":"ContainerDied","Data":"c83e041ccbe28cc109471010621433baa4da8a5725021b1b9a2d4ab402d027a1"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.909565 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c83e041ccbe28cc109471010621433baa4da8a5725021b1b9a2d4ab402d027a1" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.909636 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2d9b-account-create-update-wlxl6" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.918060 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b7cc9557b-77tq2" event={"ID":"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa","Type":"ContainerStarted","Data":"5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.918498 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.931563 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerStarted","Data":"45e1cfe174fbfd539db083ce6e61bc31bfbcfd037aceb30b23c951bd659a7109"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.935448 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"d09b351a-8da4-4f00-8847-f3461478179f","Type":"ContainerStarted","Data":"02a5e2d5d7e31d67b1bb7a3cbeb3d323b8ed9573be1a3ec02ee106bca3ba399c"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.946766 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerID="af7fbc84522ccf5649bb0a370c37dac7dd268bfbb7ce51833545d0053cd05d20" exitCode=0 Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.946900 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" event={"ID":"6f14f59b-5faf-48e0-bbdc-7f97c3836a35","Type":"ContainerDied","Data":"af7fbc84522ccf5649bb0a370c37dac7dd268bfbb7ce51833545d0053cd05d20"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.959705 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6nsdt" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.959945 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-de3f-account-create-update-d2gp8" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.969747 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6nsdt" event={"ID":"3c6fc941-1576-4817-859a-6644349bc8cd","Type":"ContainerDied","Data":"0402e46f35c154212ec7419bd6c3fec74c389a550cdcd2bb465f45223c5e91dd"} Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.969794 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0402e46f35c154212ec7419bd6c3fec74c389a550cdcd2bb465f45223c5e91dd" Feb 16 15:15:33 crc kubenswrapper[4705]: I0216 15:15:33.967738 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7b7cc9557b-77tq2" podStartSLOduration=16.967716781 podStartE2EDuration="16.967716781s" podCreationTimestamp="2026-02-16 15:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:33.946972258 +0000 UTC m=+1328.131949324" watchObservedRunningTime="2026-02-16 15:15:33.967716781 +0000 UTC m=+1328.152693857" Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.178867 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.541340 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85b76884b7-g4c57" Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.781263 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.783024 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6599894f76-dcwz8" Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.875458 4705 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/placement-565b84d684-sh8jq"] Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.876209 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-565b84d684-sh8jq" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-api" containerID="cri-o://72eb1ef184be31aa6e604bc1b1e7ef2a67bc265c5ddd264b807efbf4b1b61b79" gracePeriod=30 Feb 16 15:15:34 crc kubenswrapper[4705]: I0216 15:15:34.876489 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-565b84d684-sh8jq" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-log" containerID="cri-o://0612e4fd190e16edf94f100c0cb911943f4b56aaf02aaa8d1073d8e8e6f4c802" gracePeriod=30 Feb 16 15:15:35 crc kubenswrapper[4705]: I0216 15:15:35.043793 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d09b351a-8da4-4f00-8847-f3461478179f","Type":"ContainerStarted","Data":"d6362dbef86cbcd19bf87413815374f6225932e1d1a905780b0f3d66245836a1"} Feb 16 15:15:35 crc kubenswrapper[4705]: I0216 15:15:35.043945 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 15:15:35 crc kubenswrapper[4705]: I0216 15:15:35.082393 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.082350787 podStartE2EDuration="6.082350787s" podCreationTimestamp="2026-02-16 15:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:35.069338081 +0000 UTC m=+1329.254315157" watchObservedRunningTime="2026-02-16 15:15:35.082350787 +0000 UTC m=+1329.267327863" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.048005 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-x6wr8" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.050656 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.052025 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mqnvt" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.066340 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" event={"ID":"6a0302cb-f7dd-46d4-8df0-2ab25bddec10","Type":"ContainerDied","Data":"0643e28f6cc16efbe3ba6a7f835bd85812e2fcc0857d0dda9b56690a6a620d51"} Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.066426 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0643e28f6cc16efbe3ba6a7f835bd85812e2fcc0857d0dda9b56690a6a620d51" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.066512 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ba40-account-create-update-8d7bg" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.074265 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mqnvt" event={"ID":"7b2a0a9c-1379-457e-a5e2-537304cfdcff","Type":"ContainerDied","Data":"a5ece6223ece92829877ec6c63ae433c7500a1bc896b69d0f284e3fd6afc7cb7"} Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.074323 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5ece6223ece92829877ec6c63ae433c7500a1bc896b69d0f284e3fd6afc7cb7" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.075854 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-mqnvt" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.106310 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x6wr8" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.107407 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x6wr8" event={"ID":"8b468686-b5ab-423d-a720-a2c77aed457f","Type":"ContainerDied","Data":"446fae71056dbf1b7f079bba077645c2c99e95a68610a5722ab512a3cf936661"} Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.107515 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="446fae71056dbf1b7f079bba077645c2c99e95a68610a5722ab512a3cf936661" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.140512 4705 generic.go:334] "Generic (PLEG): container finished" podID="8486800f-2aec-490d-a174-e05a0fa27a62" containerID="0612e4fd190e16edf94f100c0cb911943f4b56aaf02aaa8d1073d8e8e6f4c802" exitCode=143 Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.140655 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-565b84d684-sh8jq" event={"ID":"8486800f-2aec-490d-a174-e05a0fa27a62","Type":"ContainerDied","Data":"0612e4fd190e16edf94f100c0cb911943f4b56aaf02aaa8d1073d8e8e6f4c802"} Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.148182 4705 generic.go:334] "Generic (PLEG): container finished" podID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerID="b70e5c0615812ff6aed42dcb8e09a0b01754fd31e289a59cfbe7b21ae9cc3afe" exitCode=0 Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.148393 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d799457-fvqj6" event={"ID":"f5639f9d-2d22-47cb-b481-10e88dc7f90f","Type":"ContainerDied","Data":"b70e5c0615812ff6aed42dcb8e09a0b01754fd31e289a59cfbe7b21ae9cc3afe"} Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.169295 4705 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b2a0a9c-1379-457e-a5e2-537304cfdcff-operator-scripts\") pod \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.169999 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72tq9\" (UniqueName: \"kubernetes.io/projected/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-kube-api-access-72tq9\") pod \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.170237 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-operator-scripts\") pod \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\" (UID: \"6a0302cb-f7dd-46d4-8df0-2ab25bddec10\") " Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.170439 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b468686-b5ab-423d-a720-a2c77aed457f-operator-scripts\") pod \"8b468686-b5ab-423d-a720-a2c77aed457f\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") " Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.176723 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r85gm\" (UniqueName: \"kubernetes.io/projected/7b2a0a9c-1379-457e-a5e2-537304cfdcff-kube-api-access-r85gm\") pod \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\" (UID: \"7b2a0a9c-1379-457e-a5e2-537304cfdcff\") " Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.176906 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5d9n\" (UniqueName: 
\"kubernetes.io/projected/8b468686-b5ab-423d-a720-a2c77aed457f-kube-api-access-b5d9n\") pod \"8b468686-b5ab-423d-a720-a2c77aed457f\" (UID: \"8b468686-b5ab-423d-a720-a2c77aed457f\") " Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.171163 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a0302cb-f7dd-46d4-8df0-2ab25bddec10" (UID: "6a0302cb-f7dd-46d4-8df0-2ab25bddec10"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.171197 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b2a0a9c-1379-457e-a5e2-537304cfdcff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b2a0a9c-1379-457e-a5e2-537304cfdcff" (UID: "7b2a0a9c-1379-457e-a5e2-537304cfdcff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.172630 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b468686-b5ab-423d-a720-a2c77aed457f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b468686-b5ab-423d-a720-a2c77aed457f" (UID: "8b468686-b5ab-423d-a720-a2c77aed457f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.181596 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-kube-api-access-72tq9" (OuterVolumeSpecName: "kube-api-access-72tq9") pod "6a0302cb-f7dd-46d4-8df0-2ab25bddec10" (UID: "6a0302cb-f7dd-46d4-8df0-2ab25bddec10"). InnerVolumeSpecName "kube-api-access-72tq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.185462 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b468686-b5ab-423d-a720-a2c77aed457f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.185491 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b2a0a9c-1379-457e-a5e2-537304cfdcff-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.185503 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72tq9\" (UniqueName: \"kubernetes.io/projected/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-kube-api-access-72tq9\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.185542 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a0302cb-f7dd-46d4-8df0-2ab25bddec10-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.198063 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b468686-b5ab-423d-a720-a2c77aed457f-kube-api-access-b5d9n" (OuterVolumeSpecName: "kube-api-access-b5d9n") pod "8b468686-b5ab-423d-a720-a2c77aed457f" (UID: "8b468686-b5ab-423d-a720-a2c77aed457f"). InnerVolumeSpecName "kube-api-access-b5d9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.200883 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b2a0a9c-1379-457e-a5e2-537304cfdcff-kube-api-access-r85gm" (OuterVolumeSpecName: "kube-api-access-r85gm") pod "7b2a0a9c-1379-457e-a5e2-537304cfdcff" (UID: "7b2a0a9c-1379-457e-a5e2-537304cfdcff"). 
InnerVolumeSpecName "kube-api-access-r85gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.288330 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r85gm\" (UniqueName: \"kubernetes.io/projected/7b2a0a9c-1379-457e-a5e2-537304cfdcff-kube-api-access-r85gm\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:36 crc kubenswrapper[4705]: I0216 15:15:36.291016 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5d9n\" (UniqueName: \"kubernetes.io/projected/8b468686-b5ab-423d-a720-a2c77aed457f-kube-api-access-b5d9n\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.204255 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d799457-fvqj6" event={"ID":"f5639f9d-2d22-47cb-b481-10e88dc7f90f","Type":"ContainerDied","Data":"a17cb528fd15d9b834f48b96c4bb4360b196f8701d2fb7e23b61e3237bd4f97b"} Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.204967 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a17cb528fd15d9b834f48b96c4bb4360b196f8701d2fb7e23b61e3237bd4f97b" Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.263446 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.452593 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-public-tls-certs\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.453091 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdqbq\" (UniqueName: \"kubernetes.io/projected/f5639f9d-2d22-47cb-b481-10e88dc7f90f-kube-api-access-hdqbq\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.456690 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-internal-tls-certs\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.456931 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-combined-ca-bundle\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.457921 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-config\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.457975 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-ovndb-tls-certs\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.458022 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-httpd-config\") pod \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\" (UID: \"f5639f9d-2d22-47cb-b481-10e88dc7f90f\") " Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.579549 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5639f9d-2d22-47cb-b481-10e88dc7f90f-kube-api-access-hdqbq" (OuterVolumeSpecName: "kube-api-access-hdqbq") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "kube-api-access-hdqbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.670201 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdqbq\" (UniqueName: \"kubernetes.io/projected/f5639f9d-2d22-47cb-b481-10e88dc7f90f-kube-api-access-hdqbq\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.795827 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-config" (OuterVolumeSpecName: "config") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.874386 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.882995 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:37 crc kubenswrapper[4705]: I0216 15:15:37.976038 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.221241 4705 generic.go:334] "Generic (PLEG): container finished" podID="8486800f-2aec-490d-a174-e05a0fa27a62" containerID="72eb1ef184be31aa6e604bc1b1e7ef2a67bc265c5ddd264b807efbf4b1b61b79" exitCode=0 Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.224822 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75d799457-fvqj6" Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.222408 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-565b84d684-sh8jq" event={"ID":"8486800f-2aec-490d-a174-e05a0fa27a62","Type":"ContainerDied","Data":"72eb1ef184be31aa6e604bc1b1e7ef2a67bc265c5ddd264b807efbf4b1b61b79"} Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.381651 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.389113 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.400606 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.406995 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.498104 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.498149 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.508545 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f5639f9d-2d22-47cb-b481-10e88dc7f90f" (UID: "f5639f9d-2d22-47cb-b481-10e88dc7f90f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.607630 4705 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5639f9d-2d22-47cb-b481-10e88dc7f90f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.697145 4705 kubelet_pods.go:2476] "Failed to reduce cpu time for pod pending volume cleanup" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" err="openat2 /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5639f9d_2d22_47cb_b481_10e88dc7f90f.slice/cgroup.controllers: no such file or directory" Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.697237 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.775420 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75d799457-fvqj6"] Feb 16 
15:15:38 crc kubenswrapper[4705]: I0216 15:15:38.799970 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-75d799457-fvqj6"] Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.276916 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7986669c9b-q8ghv" event={"ID":"08b1576e-92c8-407b-b821-e0cbfe1be11a","Type":"ContainerStarted","Data":"9d643e2db80bd365b8f950c7dece546e6ce638bc7851f64c39e67ef4e3b8f204"} Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.278493 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.287255 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerStarted","Data":"a3027d6ce2c88d56b91a5ce2c8c6cdb2a41063ad421265e6712a552c39c4169b"} Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.297779 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-565b84d684-sh8jq" event={"ID":"8486800f-2aec-490d-a174-e05a0fa27a62","Type":"ContainerDied","Data":"abe1e154dc793291fe4a1e1361bdea85c411201d08d5b6df947af6208be90837"} Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.297843 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abe1e154dc793291fe4a1e1361bdea85c411201d08d5b6df947af6208be90837" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.308204 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7986669c9b-q8ghv" podStartSLOduration=7.27932802 podStartE2EDuration="13.308173429s" podCreationTimestamp="2026-02-16 15:15:26 +0000 UTC" firstStartedPulling="2026-02-16 15:15:31.125630893 +0000 UTC m=+1325.310607959" lastFinishedPulling="2026-02-16 15:15:37.154476292 +0000 UTC m=+1331.339453368" observedRunningTime="2026-02-16 15:15:39.302040207 +0000 UTC 
m=+1333.487017283" watchObservedRunningTime="2026-02-16 15:15:39.308173429 +0000 UTC m=+1333.493150505" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.317560 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" event={"ID":"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca","Type":"ContainerStarted","Data":"895e903ae6dba5468c8cb77af001de11f0118579acced7988a77fc91e50c6926"} Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.317788 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" podUID="ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" containerName="heat-cfnapi" containerID="cri-o://895e903ae6dba5468c8cb77af001de11f0118579acced7988a77fc91e50c6926" gracePeriod=60 Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.318122 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.329680 4705 generic.go:334] "Generic (PLEG): container finished" podID="951d407e-26bd-442f-8519-61650a9a3e70" containerID="825fae9ff1f73721a415051822f8800d35104abf442acc8f65b15cdad2567831" exitCode=1 Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.329803 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74b44f99fd-mnr7j" event={"ID":"951d407e-26bd-442f-8519-61650a9a3e70","Type":"ContainerDied","Data":"825fae9ff1f73721a415051822f8800d35104abf442acc8f65b15cdad2567831"} Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.330815 4705 scope.go:117] "RemoveContainer" containerID="825fae9ff1f73721a415051822f8800d35104abf442acc8f65b15cdad2567831" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.357334 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" podStartSLOduration=15.305298969 podStartE2EDuration="22.357307821s" podCreationTimestamp="2026-02-16 15:15:17 +0000 
UTC" firstStartedPulling="2026-02-16 15:15:29.97164389 +0000 UTC m=+1324.156620966" lastFinishedPulling="2026-02-16 15:15:37.023652752 +0000 UTC m=+1331.208629818" observedRunningTime="2026-02-16 15:15:39.356720744 +0000 UTC m=+1333.541697820" watchObservedRunningTime="2026-02-16 15:15:39.357307821 +0000 UTC m=+1333.542284897" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.358427 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" event={"ID":"6f14f59b-5faf-48e0-bbdc-7f97c3836a35","Type":"ContainerStarted","Data":"2b2c7f5ac108f1a28b51646f3261bd0600fde3c58221d5733c1cb4d19e39339a"} Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.359809 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.399649 4705 generic.go:334] "Generic (PLEG): container finished" podID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerID="9c8ecf1fe795367a88d6a0cb380949afee410f8cb00e746e4df71c7687d69924" exitCode=1 Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.400330 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" event={"ID":"59b661f8-8d2f-45db-ab8d-cd6436cec8eb","Type":"ContainerDied","Data":"9c8ecf1fe795367a88d6a0cb380949afee410f8cb00e746e4df71c7687d69924"} Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.400873 4705 scope.go:117] "RemoveContainer" containerID="9c8ecf1fe795367a88d6a0cb380949afee410f8cb00e746e4df71c7687d69924" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.417653 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65b6d6849b-79456" event={"ID":"94fb430a-807d-4e37-bc5a-9b4c75454427","Type":"ContainerStarted","Data":"25b842f93c5831708a88045242a40db722a4a7e440b2718e345b48d4f563a393"} Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.420045 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.435621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-656d9cf494-c6m8t" event={"ID":"3a49bd2f-26b0-4969-86db-cd980251a202","Type":"ContainerStarted","Data":"6ca9b1a8d277b8ac8e146f701cfca1d79427d28cc9235476ddd2bf5977afbd60"} Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.435963 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-656d9cf494-c6m8t" podUID="3a49bd2f-26b0-4969-86db-cd980251a202" containerName="heat-api" containerID="cri-o://6ca9b1a8d277b8ac8e146f701cfca1d79427d28cc9235476ddd2bf5977afbd60" gracePeriod=60 Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.436067 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.456091 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" podStartSLOduration=22.456067198 podStartE2EDuration="22.456067198s" podCreationTimestamp="2026-02-16 15:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:39.429313306 +0000 UTC m=+1333.614290382" watchObservedRunningTime="2026-02-16 15:15:39.456067198 +0000 UTC m=+1333.641044274" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.574665 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-65b6d6849b-79456" podStartSLOduration=7.68288107 podStartE2EDuration="13.574641803s" podCreationTimestamp="2026-02-16 15:15:26 +0000 UTC" firstStartedPulling="2026-02-16 15:15:31.262026419 +0000 UTC m=+1325.447003495" lastFinishedPulling="2026-02-16 15:15:37.153787152 +0000 UTC m=+1331.338764228" observedRunningTime="2026-02-16 15:15:39.550242447 
+0000 UTC m=+1333.735219523" watchObservedRunningTime="2026-02-16 15:15:39.574641803 +0000 UTC m=+1333.759618869" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.616603 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-656d9cf494-c6m8t" podStartSLOduration=16.499687728 podStartE2EDuration="22.616572092s" podCreationTimestamp="2026-02-16 15:15:17 +0000 UTC" firstStartedPulling="2026-02-16 15:15:31.025565619 +0000 UTC m=+1325.210542685" lastFinishedPulling="2026-02-16 15:15:37.142449973 +0000 UTC m=+1331.327427049" observedRunningTime="2026-02-16 15:15:39.589600334 +0000 UTC m=+1333.774577420" watchObservedRunningTime="2026-02-16 15:15:39.616572092 +0000 UTC m=+1333.801549158" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.652048 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-565b84d684-sh8jq" Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.760693 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-internal-tls-certs\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.760843 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-scripts\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.760924 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-public-tls-certs\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " Feb 16 15:15:39 crc 
kubenswrapper[4705]: I0216 15:15:39.761125 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8486800f-2aec-490d-a174-e05a0fa27a62-logs\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.761163 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-config-data\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.761221 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-combined-ca-bundle\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.761345 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58726\" (UniqueName: \"kubernetes.io/projected/8486800f-2aec-490d-a174-e05a0fa27a62-kube-api-access-58726\") pod \"8486800f-2aec-490d-a174-e05a0fa27a62\" (UID: \"8486800f-2aec-490d-a174-e05a0fa27a62\") " Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.763100 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8486800f-2aec-490d-a174-e05a0fa27a62-logs" (OuterVolumeSpecName: "logs") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.775583 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8486800f-2aec-490d-a174-e05a0fa27a62-kube-api-access-58726" (OuterVolumeSpecName: "kube-api-access-58726") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "kube-api-access-58726". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.780535 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-scripts" (OuterVolumeSpecName: "scripts") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.865302 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8486800f-2aec-490d-a174-e05a0fa27a62-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.865335 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58726\" (UniqueName: \"kubernetes.io/projected/8486800f-2aec-490d-a174-e05a0fa27a62-kube-api-access-58726\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:39 crc kubenswrapper[4705]: I0216 15:15:39.865349 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.015519 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7cfb944475-hpwlf"
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.015597 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7cfb944475-hpwlf"
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.166199 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-74b44f99fd-mnr7j"
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.166274 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-74b44f99fd-mnr7j"
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.415449 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-config-data" (OuterVolumeSpecName: "config-data") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.444478 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" path="/var/lib/kubelet/pods/f5639f9d-2d22-47cb-b481-10e88dc7f90f/volumes"
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.467467 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-565b84d684-sh8jq"
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.507420 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.530286 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.612394 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.691557 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.717085 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.841797 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerStarted","Data":"e5ca78d36c89afe7912538d074635940c19ba97231025aab7b0bf2b985e4e9e5"}
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.916761 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8486800f-2aec-490d-a174-e05a0fa27a62" (UID: "8486800f-2aec-490d-a174-e05a0fa27a62"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:40 crc kubenswrapper[4705]: I0216 15:15:40.948004 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8486800f-2aec-490d-a174-e05a0fa27a62-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.179317 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-565b84d684-sh8jq"]
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.214700 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-565b84d684-sh8jq"]
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.518046 4705 generic.go:334] "Generic (PLEG): container finished" podID="ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" containerID="895e903ae6dba5468c8cb77af001de11f0118579acced7988a77fc91e50c6926" exitCode=0
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.518180 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" event={"ID":"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca","Type":"ContainerDied","Data":"895e903ae6dba5468c8cb77af001de11f0118579acced7988a77fc91e50c6926"}
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.547456 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" event={"ID":"59b661f8-8d2f-45db-ab8d-cd6436cec8eb","Type":"ContainerStarted","Data":"b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1"}
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.549273 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7cfb944475-hpwlf"
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.587819 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74b44f99fd-mnr7j" event={"ID":"951d407e-26bd-442f-8519-61650a9a3e70","Type":"ContainerStarted","Data":"ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e"}
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.588831 4705 scope.go:117] "RemoveContainer" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e"
Feb 16 15:15:41 crc kubenswrapper[4705]: E0216 15:15:41.589138 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-74b44f99fd-mnr7j_openstack(951d407e-26bd-442f-8519-61650a9a3e70)\"" pod="openstack/heat-api-74b44f99fd-mnr7j" podUID="951d407e-26bd-442f-8519-61650a9a3e70"
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.597049 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" podStartSLOduration=11.430081625 podStartE2EDuration="17.597027168s" podCreationTimestamp="2026-02-16 15:15:24 +0000 UTC" firstStartedPulling="2026-02-16 15:15:30.984618557 +0000 UTC m=+1325.169595633" lastFinishedPulling="2026-02-16 15:15:37.1515641 +0000 UTC m=+1331.336541176" observedRunningTime="2026-02-16 15:15:41.581428649 +0000 UTC m=+1335.766405725" watchObservedRunningTime="2026-02-16 15:15:41.597027168 +0000 UTC m=+1335.782004244"
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.651758 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4881941b-eb71-45be-aa51-0e8431b29e89","Type":"ContainerStarted","Data":"d7b0a7eaf9b72e98b057d054d77c8d71885c3f7b2e49f0439793a568ebfdd2d8"}
Feb 16 15:15:41 crc kubenswrapper[4705]: I0216 15:15:41.698644 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.907486286 podStartE2EDuration="33.698622355s" podCreationTimestamp="2026-02-16 15:15:08 +0000 UTC" firstStartedPulling="2026-02-16 15:15:09.178429717 +0000 UTC m=+1303.363406803" lastFinishedPulling="2026-02-16 15:15:38.969565796 +0000 UTC m=+1333.154542872" observedRunningTime="2026-02-16 15:15:41.690206008 +0000 UTC m=+1335.875183094" watchObservedRunningTime="2026-02-16 15:15:41.698622355 +0000 UTC m=+1335.883599421"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.064667 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.237433 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data\") pod \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") "
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.237562 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data-custom\") pod \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") "
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.237753 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tvdc\" (UniqueName: \"kubernetes.io/projected/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-kube-api-access-9tvdc\") pod \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") "
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.238147 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-combined-ca-bundle\") pod \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\" (UID: \"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca\") "
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.249717 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-kube-api-access-9tvdc" (OuterVolumeSpecName: "kube-api-access-9tvdc") pod "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" (UID: "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca"). InnerVolumeSpecName "kube-api-access-9tvdc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.251526 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" (UID: "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.342546 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tvdc\" (UniqueName: \"kubernetes.io/projected/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-kube-api-access-9tvdc\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.342581 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.353514 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data" (OuterVolumeSpecName: "config-data") pod "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" (UID: "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.365336 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" (UID: "ed97b1a5-c93e-44ce-b210-5975fa6ec6ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.432168 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" path="/var/lib/kubelet/pods/8486800f-2aec-490d-a174-e05a0fa27a62/volumes"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.447233 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.447266 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.530581 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sz8ws"]
Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.533022 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-api"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.533542 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-api"
Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.533666 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6fc941-1576-4817-859a-6644349bc8cd" containerName="mariadb-database-create"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.533748 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6fc941-1576-4817-859a-6644349bc8cd" containerName="mariadb-database-create"
Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.533851 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-httpd"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.533935 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-httpd"
Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.534026 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b468686-b5ab-423d-a720-a2c77aed457f" containerName="mariadb-database-create"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.534111 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b468686-b5ab-423d-a720-a2c77aed457f" containerName="mariadb-database-create"
Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.534220 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-log"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.534320 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-log"
Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.534424 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2a0a9c-1379-457e-a5e2-537304cfdcff" containerName="mariadb-database-create"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.534508 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2a0a9c-1379-457e-a5e2-537304cfdcff" containerName="mariadb-database-create"
Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.534597 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a0302cb-f7dd-46d4-8df0-2ab25bddec10" containerName="mariadb-account-create-update"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.534852 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a0302cb-f7dd-46d4-8df0-2ab25bddec10" containerName="mariadb-account-create-update"
Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.534939 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-api"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.535039 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-api"
Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.535125 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18d067a-2ef1-4b11-936f-aef7f7910a80" containerName="mariadb-account-create-update"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.535207 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18d067a-2ef1-4b11-936f-aef7f7910a80" containerName="mariadb-account-create-update"
Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.535300 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38af35f6-7590-41c4-9442-ec89fe02106f" containerName="mariadb-account-create-update"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.535410 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="38af35f6-7590-41c4-9442-ec89fe02106f" containerName="mariadb-account-create-update"
Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.535524 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" containerName="heat-cfnapi"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.535607 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" containerName="heat-cfnapi"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536025 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-httpd"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536114 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" containerName="heat-cfnapi"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536203 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c18d067a-2ef1-4b11-936f-aef7f7910a80" containerName="mariadb-account-create-update"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536295 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5639f9d-2d22-47cb-b481-10e88dc7f90f" containerName="neutron-api"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536402 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6fc941-1576-4817-859a-6644349bc8cd" containerName="mariadb-database-create"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536531 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a0302cb-f7dd-46d4-8df0-2ab25bddec10" containerName="mariadb-account-create-update"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536624 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-api"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536709 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b468686-b5ab-423d-a720-a2c77aed457f" containerName="mariadb-database-create"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536800 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2a0a9c-1379-457e-a5e2-537304cfdcff" containerName="mariadb-database-create"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536882 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8486800f-2aec-490d-a174-e05a0fa27a62" containerName="placement-log"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.536973 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="38af35f6-7590-41c4-9442-ec89fe02106f" containerName="mariadb-account-create-update"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.538326 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.543858 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mq9hp"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.544123 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.544384 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.552496 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sz8ws"]
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.553354 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn9xh\" (UniqueName: \"kubernetes.io/projected/06284688-bd14-48ff-adf1-d0dc441d1238-kube-api-access-rn9xh\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.553443 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-config-data\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.553491 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.553728 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-scripts\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.663985 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn9xh\" (UniqueName: \"kubernetes.io/projected/06284688-bd14-48ff-adf1-d0dc441d1238-kube-api-access-rn9xh\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.664060 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-config-data\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.664105 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.664273 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-scripts\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.672899 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-config-data\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.677576 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-scripts\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.679754 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.680271 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk" event={"ID":"ed97b1a5-c93e-44ce-b210-5975fa6ec6ca","Type":"ContainerDied","Data":"07359fe7b9cf7c5f1d493c117441af14d97a55cf8e7e896736d82451018cdca8"}
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.680334 4705 scope.go:117] "RemoveContainer" containerID="895e903ae6dba5468c8cb77af001de11f0118579acced7988a77fc91e50c6926"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.680505 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57d4846c7f-r8fqk"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.696568 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn9xh\" (UniqueName: \"kubernetes.io/projected/06284688-bd14-48ff-adf1-d0dc441d1238-kube-api-access-rn9xh\") pod \"nova-cell0-conductor-db-sync-sz8ws\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.700096 4705 generic.go:334] "Generic (PLEG): container finished" podID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerID="b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1" exitCode=1
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.700762 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" event={"ID":"59b661f8-8d2f-45db-ab8d-cd6436cec8eb","Type":"ContainerDied","Data":"b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1"}
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.701791 4705 scope.go:117] "RemoveContainer" containerID="b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1"
Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.702095 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7cfb944475-hpwlf_openstack(59b661f8-8d2f-45db-ab8d-cd6436cec8eb)\"" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.704982 4705 generic.go:334] "Generic (PLEG): container finished" podID="951d407e-26bd-442f-8519-61650a9a3e70" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e" exitCode=1
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.705061 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74b44f99fd-mnr7j" event={"ID":"951d407e-26bd-442f-8519-61650a9a3e70","Type":"ContainerDied","Data":"ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e"}
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.706861 4705 scope.go:117] "RemoveContainer" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e"
Feb 16 15:15:42 crc kubenswrapper[4705]: E0216 15:15:42.707160 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-74b44f99fd-mnr7j_openstack(951d407e-26bd-442f-8519-61650a9a3e70)\"" pod="openstack/heat-api-74b44f99fd-mnr7j" podUID="951d407e-26bd-442f-8519-61650a9a3e70"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.748008 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerStarted","Data":"dbd3ad9240e471658a38c3db261ddd93df9920dad9c4a78850322029c86956f3"}
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.748228 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-central-agent" containerID="cri-o://45e1cfe174fbfd539db083ce6e61bc31bfbcfd037aceb30b23c951bd659a7109" gracePeriod=30
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.748576 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.748634 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="proxy-httpd" containerID="cri-o://dbd3ad9240e471658a38c3db261ddd93df9920dad9c4a78850322029c86956f3" gracePeriod=30
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.748696 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="sg-core" containerID="cri-o://e5ca78d36c89afe7912538d074635940c19ba97231025aab7b0bf2b985e4e9e5" gracePeriod=30
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.748746 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-notification-agent" containerID="cri-o://a3027d6ce2c88d56b91a5ce2c8c6cdb2a41063ad421265e6712a552c39c4169b" gracePeriod=30
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.800285 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.354199884 podStartE2EDuration="14.800259306s" podCreationTimestamp="2026-02-16 15:15:28 +0000 UTC" firstStartedPulling="2026-02-16 15:15:31.300523982 +0000 UTC m=+1325.485501048" lastFinishedPulling="2026-02-16 15:15:41.746583394 +0000 UTC m=+1335.931560470" observedRunningTime="2026-02-16 15:15:42.792362584 +0000 UTC m=+1336.977339660" watchObservedRunningTime="2026-02-16 15:15:42.800259306 +0000 UTC m=+1336.985236382"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.860012 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57d4846c7f-r8fqk"]
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.871397 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sz8ws"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.879480 4705 scope.go:117] "RemoveContainer" containerID="9c8ecf1fe795367a88d6a0cb380949afee410f8cb00e746e4df71c7687d69924"
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.883541 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-57d4846c7f-r8fqk"]
Feb 16 15:15:42 crc kubenswrapper[4705]: I0216 15:15:42.977666 4705 scope.go:117] "RemoveContainer" containerID="825fae9ff1f73721a415051822f8800d35104abf442acc8f65b15cdad2567831"
Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.653121 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sz8ws"]
Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.763413 4705 generic.go:334] "Generic (PLEG): container finished" podID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerID="dbd3ad9240e471658a38c3db261ddd93df9920dad9c4a78850322029c86956f3" exitCode=0
Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.763448 4705 generic.go:334] "Generic (PLEG): container finished" podID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerID="e5ca78d36c89afe7912538d074635940c19ba97231025aab7b0bf2b985e4e9e5" exitCode=2
Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.763456 4705 generic.go:334] "Generic (PLEG): container finished" podID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerID="a3027d6ce2c88d56b91a5ce2c8c6cdb2a41063ad421265e6712a552c39c4169b" exitCode=0
Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.763493 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerDied","Data":"dbd3ad9240e471658a38c3db261ddd93df9920dad9c4a78850322029c86956f3"}
Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.763545 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerDied","Data":"e5ca78d36c89afe7912538d074635940c19ba97231025aab7b0bf2b985e4e9e5"}
Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.763559 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerDied","Data":"a3027d6ce2c88d56b91a5ce2c8c6cdb2a41063ad421265e6712a552c39c4169b"}
Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.765123 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" event={"ID":"06284688-bd14-48ff-adf1-d0dc441d1238","Type":"ContainerStarted","Data":"f6b9a0b3fcc55910c8b4dfbb0758f383016eccbe6cd9929f6713ec8d06da6409"}
Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.769943 4705 scope.go:117] "RemoveContainer" containerID="b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1"
Feb 16 15:15:43 crc kubenswrapper[4705]: E0216 15:15:43.770451 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7cfb944475-hpwlf_openstack(59b661f8-8d2f-45db-ab8d-cd6436cec8eb)\"" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb"
Feb 16 15:15:43 crc kubenswrapper[4705]: I0216 15:15:43.771543 4705 scope.go:117] "RemoveContainer" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e"
Feb 16 15:15:43 crc kubenswrapper[4705]: E0216 15:15:43.771816 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-74b44f99fd-mnr7j_openstack(951d407e-26bd-442f-8519-61650a9a3e70)\"" pod="openstack/heat-api-74b44f99fd-mnr7j" podUID="951d407e-26bd-442f-8519-61650a9a3e70"
Feb 16 15:15:44 crc kubenswrapper[4705]: I0216 15:15:44.443508 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed97b1a5-c93e-44ce-b210-5975fa6ec6ca" path="/var/lib/kubelet/pods/ed97b1a5-c93e-44ce-b210-5975fa6ec6ca/volumes"
Feb 16 15:15:44 crc kubenswrapper[4705]: I0216 15:15:44.828522 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="d09b351a-8da4-4f00-8847-f3461478179f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.226:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 15:15:44 crc kubenswrapper[4705]: I0216 15:15:44.854718 4705 generic.go:334] "Generic (PLEG): container finished" podID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerID="45e1cfe174fbfd539db083ce6e61bc31bfbcfd037aceb30b23c951bd659a7109" exitCode=0
Feb 16 15:15:44 crc kubenswrapper[4705]: I0216 15:15:44.854772 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerDied","Data":"45e1cfe174fbfd539db083ce6e61bc31bfbcfd037aceb30b23c951bd659a7109"}
Feb 16 15:15:44 crc kubenswrapper[4705]: I0216 15:15:44.885653 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7b7bf99b56-hm6dc"
Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.016604 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7cfb944475-hpwlf"
Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.017791 4705 scope.go:117] "RemoveContainer" containerID="b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1"
Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.018147 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7cfb944475-hpwlf_openstack(59b661f8-8d2f-45db-ab8d-cd6436cec8eb)\"" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb"
Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.021626 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7b7cc9557b-77tq2"]
Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.021867 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7b7cc9557b-77tq2" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine" containerID="cri-o://5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" gracePeriod=60
Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.064816 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.066705 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.087900 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.087972 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc
= command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7b7cc9557b-77tq2" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.165835 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.166941 4705 scope.go:117] "RemoveContainer" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e" Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.167195 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-74b44f99fd-mnr7j_openstack(951d407e-26bd-442f-8519-61650a9a3e70)\"" pod="openstack/heat-api-74b44f99fd-mnr7j" podUID="951d407e-26bd-442f-8519-61650a9a3e70" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.168747 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.562217 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.679504 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-sg-core-conf-yaml\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.679752 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-combined-ca-bundle\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.679780 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-run-httpd\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.679802 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-scripts\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.679935 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-log-httpd\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.680023 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-config-data\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.680172 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbr2v\" (UniqueName: \"kubernetes.io/projected/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-kube-api-access-dbr2v\") pod \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\" (UID: \"6760289c-b8a9-45ed-bbab-3d5d5ca1db17\") " Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.681025 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.681233 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.690622 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-scripts" (OuterVolumeSpecName: "scripts") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.690706 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-kube-api-access-dbr2v" (OuterVolumeSpecName: "kube-api-access-dbr2v") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "kube-api-access-dbr2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.767620 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.783673 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.783709 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.783720 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.783732 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbr2v\" (UniqueName: \"kubernetes.io/projected/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-kube-api-access-dbr2v\") on node \"crc\" 
DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.783744 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.871200 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-config-data" (OuterVolumeSpecName: "config-data") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.884855 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6760289c-b8a9-45ed-bbab-3d5d5ca1db17" (UID: "6760289c-b8a9-45ed-bbab-3d5d5ca1db17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.885469 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.886204 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6760289c-b8a9-45ed-bbab-3d5d5ca1db17","Type":"ContainerDied","Data":"3d27a22eae577ba6a17893a80486afe6063753a252d79954100c810c383ebd54"} Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.886249 4705 scope.go:117] "RemoveContainer" containerID="dbd3ad9240e471658a38c3db261ddd93df9920dad9c4a78850322029c86956f3" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.887817 4705 scope.go:117] "RemoveContainer" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.890074 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.890109 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6760289c-b8a9-45ed-bbab-3d5d5ca1db17-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:45 crc kubenswrapper[4705]: E0216 15:15:45.890078 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-74b44f99fd-mnr7j_openstack(951d407e-26bd-442f-8519-61650a9a3e70)\"" pod="openstack/heat-api-74b44f99fd-mnr7j" podUID="951d407e-26bd-442f-8519-61650a9a3e70" Feb 16 15:15:45 crc kubenswrapper[4705]: I0216 15:15:45.997971 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.016401 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 
15:15:46.022843 4705 scope.go:117] "RemoveContainer" containerID="e5ca78d36c89afe7912538d074635940c19ba97231025aab7b0bf2b985e4e9e5" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.034169 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:46 crc kubenswrapper[4705]: E0216 15:15:46.034738 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="proxy-httpd" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.034756 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="proxy-httpd" Feb 16 15:15:46 crc kubenswrapper[4705]: E0216 15:15:46.034790 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="sg-core" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.034796 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="sg-core" Feb 16 15:15:46 crc kubenswrapper[4705]: E0216 15:15:46.034808 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-central-agent" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.034818 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-central-agent" Feb 16 15:15:46 crc kubenswrapper[4705]: E0216 15:15:46.034831 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-notification-agent" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.034837 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-notification-agent" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.035065 4705 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-central-agent" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.035088 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="proxy-httpd" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.035103 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="ceilometer-notification-agent" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.035120 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" containerName="sg-core" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.037227 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.040216 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.040500 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.051203 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.091801 4705 scope.go:117] "RemoveContainer" containerID="a3027d6ce2c88d56b91a5ce2c8c6cdb2a41063ad421265e6712a552c39c4169b" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.126079 4705 scope.go:117] "RemoveContainer" containerID="45e1cfe174fbfd539db083ce6e61bc31bfbcfd037aceb30b23c951bd659a7109" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.206208 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-scripts\") pod \"ceilometer-0\" 
(UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.206674 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.206722 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.206756 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-log-httpd\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.206817 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.207158 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-run-httpd\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 
15:15:46.207217 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2tgr\" (UniqueName: \"kubernetes.io/projected/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-kube-api-access-w2tgr\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311197 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-scripts\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311295 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311344 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311393 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-log-httpd\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311447 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-combined-ca-bundle\") 
pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311558 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-run-httpd\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.311580 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2tgr\" (UniqueName: \"kubernetes.io/projected/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-kube-api-access-w2tgr\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.312941 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-log-httpd\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.314210 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-run-httpd\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.317973 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.319202 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-scripts\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.321736 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.329496 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2tgr\" (UniqueName: \"kubernetes.io/projected/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-kube-api-access-w2tgr\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.337069 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.356633 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.438832 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6760289c-b8a9-45ed-bbab-3d5d5ca1db17" path="/var/lib/kubelet/pods/6760289c-b8a9-45ed-bbab-3d5d5ca1db17/volumes" Feb 16 15:15:46 crc kubenswrapper[4705]: I0216 15:15:46.952929 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:47 crc kubenswrapper[4705]: E0216 15:15:47.867700 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:47 crc kubenswrapper[4705]: E0216 15:15:47.872940 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:47 crc kubenswrapper[4705]: E0216 15:15:47.878757 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:47 crc kubenswrapper[4705]: E0216 15:15:47.878804 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7b7cc9557b-77tq2" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine" 
Feb 16 15:15:47 crc kubenswrapper[4705]: I0216 15:15:47.930776 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerStarted","Data":"0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537"} Feb 16 15:15:47 crc kubenswrapper[4705]: I0216 15:15:47.931176 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerStarted","Data":"0b27ef6e89e2cf0aaae157a2376b147fb79e694acf057c4989514c1f299a5941"} Feb 16 15:15:47 crc kubenswrapper[4705]: I0216 15:15:47.933263 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:15:48 crc kubenswrapper[4705]: I0216 15:15:48.029427 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bkmwk"] Feb 16 15:15:48 crc kubenswrapper[4705]: I0216 15:15:48.029837 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" containerName="dnsmasq-dns" containerID="cri-o://e0319e97509f4edfb41168b6ddd4f0b12f375b7360c62104003abe78576492a1" gracePeriod=10 Feb 16 15:15:48 crc kubenswrapper[4705]: I0216 15:15:48.405864 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.068011 4705 generic.go:334] "Generic (PLEG): container finished" podID="541411df-f636-4dab-a4e2-2ecc8933f236" containerID="e0319e97509f4edfb41168b6ddd4f0b12f375b7360c62104003abe78576492a1" exitCode=0 Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.068075 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" 
event={"ID":"541411df-f636-4dab-a4e2-2ecc8933f236","Type":"ContainerDied","Data":"e0319e97509f4edfb41168b6ddd4f0b12f375b7360c62104003abe78576492a1"} Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.123358 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.303945 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/541411df-f636-4dab-a4e2-2ecc8933f236-kube-api-access-fkbhv\") pod \"541411df-f636-4dab-a4e2-2ecc8933f236\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.304288 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-nb\") pod \"541411df-f636-4dab-a4e2-2ecc8933f236\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.304324 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-sb\") pod \"541411df-f636-4dab-a4e2-2ecc8933f236\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.304403 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-config\") pod \"541411df-f636-4dab-a4e2-2ecc8933f236\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.304428 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-svc\") 
pod \"541411df-f636-4dab-a4e2-2ecc8933f236\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.304678 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-swift-storage-0\") pod \"541411df-f636-4dab-a4e2-2ecc8933f236\" (UID: \"541411df-f636-4dab-a4e2-2ecc8933f236\") " Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.322390 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541411df-f636-4dab-a4e2-2ecc8933f236-kube-api-access-fkbhv" (OuterVolumeSpecName: "kube-api-access-fkbhv") pod "541411df-f636-4dab-a4e2-2ecc8933f236" (UID: "541411df-f636-4dab-a4e2-2ecc8933f236"). InnerVolumeSpecName "kube-api-access-fkbhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.391215 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "541411df-f636-4dab-a4e2-2ecc8933f236" (UID: "541411df-f636-4dab-a4e2-2ecc8933f236"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.407849 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.407892 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkbhv\" (UniqueName: \"kubernetes.io/projected/541411df-f636-4dab-a4e2-2ecc8933f236-kube-api-access-fkbhv\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.446056 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "541411df-f636-4dab-a4e2-2ecc8933f236" (UID: "541411df-f636-4dab-a4e2-2ecc8933f236"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.450429 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "541411df-f636-4dab-a4e2-2ecc8933f236" (UID: "541411df-f636-4dab-a4e2-2ecc8933f236"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.503478 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "541411df-f636-4dab-a4e2-2ecc8933f236" (UID: "541411df-f636-4dab-a4e2-2ecc8933f236"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.503489 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-config" (OuterVolumeSpecName: "config") pod "541411df-f636-4dab-a4e2-2ecc8933f236" (UID: "541411df-f636-4dab-a4e2-2ecc8933f236"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.512566 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.512595 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.512606 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:49 crc kubenswrapper[4705]: I0216 15:15:49.512615 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/541411df-f636-4dab-a4e2-2ecc8933f236-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.080833 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-65b6d6849b-79456" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.091804 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerStarted","Data":"ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a"} Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.093260 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" event={"ID":"541411df-f636-4dab-a4e2-2ecc8933f236","Type":"ContainerDied","Data":"4cd8d63ef6157fd647119bfab51e4fd5281201daf21b70697f5351220cfe9c1c"} Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.093299 4705 scope.go:117] "RemoveContainer" containerID="e0319e97509f4edfb41168b6ddd4f0b12f375b7360c62104003abe78576492a1" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.093489 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-bkmwk" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.167005 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7cfb944475-hpwlf"] Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.183241 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bkmwk"] Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.201649 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-bkmwk"] Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.259699 4705 scope.go:117] "RemoveContainer" containerID="40437351e7b265646ad6bf7b8802bcd81622e7977bf5739847bd739b6a21b1a3" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.380106 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7986669c9b-q8ghv" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.456510 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" path="/var/lib/kubelet/pods/541411df-f636-4dab-a4e2-2ecc8933f236/volumes" Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.457325 
4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-74b44f99fd-mnr7j"] Feb 16 15:15:50 crc kubenswrapper[4705]: I0216 15:15:50.898877 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.054434 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-265p4\" (UniqueName: \"kubernetes.io/projected/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-kube-api-access-265p4\") pod \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.055065 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-combined-ca-bundle\") pod \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.055168 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data-custom\") pod \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.055320 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data\") pod \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\" (UID: \"59b661f8-8d2f-45db-ab8d-cd6436cec8eb\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.070025 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-kube-api-access-265p4" (OuterVolumeSpecName: "kube-api-access-265p4") pod 
"59b661f8-8d2f-45db-ab8d-cd6436cec8eb" (UID: "59b661f8-8d2f-45db-ab8d-cd6436cec8eb"). InnerVolumeSpecName "kube-api-access-265p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.072197 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "59b661f8-8d2f-45db-ab8d-cd6436cec8eb" (UID: "59b661f8-8d2f-45db-ab8d-cd6436cec8eb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.127860 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59b661f8-8d2f-45db-ab8d-cd6436cec8eb" (UID: "59b661f8-8d2f-45db-ab8d-cd6436cec8eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.161003 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" event={"ID":"59b661f8-8d2f-45db-ab8d-cd6436cec8eb","Type":"ContainerDied","Data":"7eed159df357b814d8fe77b30f4e632478a311f8b770660151ac4fae245b6428"} Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.161344 4705 scope.go:117] "RemoveContainer" containerID="b70a6384d9023291fc8604cdf5c0cc42d2506102d749e5ca54e8bec1243195f1" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.161499 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7cfb944475-hpwlf" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.162051 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.162077 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-265p4\" (UniqueName: \"kubernetes.io/projected/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-kube-api-access-265p4\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.162089 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.193579 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerStarted","Data":"182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b"} Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.235255 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.276838 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data" (OuterVolumeSpecName: "config-data") pod "59b661f8-8d2f-45db-ab8d-cd6436cec8eb" (UID: "59b661f8-8d2f-45db-ab8d-cd6436cec8eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.378780 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-combined-ca-bundle\") pod \"951d407e-26bd-442f-8519-61650a9a3e70\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.378965 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data\") pod \"951d407e-26bd-442f-8519-61650a9a3e70\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.379149 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data-custom\") pod \"951d407e-26bd-442f-8519-61650a9a3e70\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.379499 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwfs2\" (UniqueName: \"kubernetes.io/projected/951d407e-26bd-442f-8519-61650a9a3e70-kube-api-access-xwfs2\") pod \"951d407e-26bd-442f-8519-61650a9a3e70\" (UID: \"951d407e-26bd-442f-8519-61650a9a3e70\") " Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.380898 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b661f8-8d2f-45db-ab8d-cd6436cec8eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.413574 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.423333 4705 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "951d407e-26bd-442f-8519-61650a9a3e70" (UID: "951d407e-26bd-442f-8519-61650a9a3e70"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.423860 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/951d407e-26bd-442f-8519-61650a9a3e70-kube-api-access-xwfs2" (OuterVolumeSpecName: "kube-api-access-xwfs2") pod "951d407e-26bd-442f-8519-61650a9a3e70" (UID: "951d407e-26bd-442f-8519-61650a9a3e70"). InnerVolumeSpecName "kube-api-access-xwfs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.485472 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.485806 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwfs2\" (UniqueName: \"kubernetes.io/projected/951d407e-26bd-442f-8519-61650a9a3e70-kube-api-access-xwfs2\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.509737 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "951d407e-26bd-442f-8519-61650a9a3e70" (UID: "951d407e-26bd-442f-8519-61650a9a3e70"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.546512 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data" (OuterVolumeSpecName: "config-data") pod "951d407e-26bd-442f-8519-61650a9a3e70" (UID: "951d407e-26bd-442f-8519-61650a9a3e70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.560414 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7cfb944475-hpwlf"] Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.576703 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7cfb944475-hpwlf"] Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.588894 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:51 crc kubenswrapper[4705]: I0216 15:15:51.589307 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/951d407e-26bd-442f-8519-61650a9a3e70-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.236970 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerStarted","Data":"fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8"} Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.239138 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.255039 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74b44f99fd-mnr7j" 
event={"ID":"951d407e-26bd-442f-8519-61650a9a3e70","Type":"ContainerDied","Data":"5dc1b5446ccf26eb084458e1080b22b0456b4c0fa87963f6cea8378d62e58a34"} Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.255102 4705 scope.go:117] "RemoveContainer" containerID="ba3ee57f8110ed7b8b8a021406da20abdf78e685b064d09eaf75fcdc60dea47e" Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.255222 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-74b44f99fd-mnr7j" Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.284101 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.606394167 podStartE2EDuration="7.284076437s" podCreationTimestamp="2026-02-16 15:15:45 +0000 UTC" firstStartedPulling="2026-02-16 15:15:46.976511274 +0000 UTC m=+1341.161488350" lastFinishedPulling="2026-02-16 15:15:51.654193544 +0000 UTC m=+1345.839170620" observedRunningTime="2026-02-16 15:15:52.26389577 +0000 UTC m=+1346.448872846" watchObservedRunningTime="2026-02-16 15:15:52.284076437 +0000 UTC m=+1346.469053513" Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.391942 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-74b44f99fd-mnr7j"] Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.440859 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" path="/var/lib/kubelet/pods/59b661f8-8d2f-45db-ab8d-cd6436cec8eb/volumes" Feb 16 15:15:52 crc kubenswrapper[4705]: I0216 15:15:52.441721 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-74b44f99fd-mnr7j"] Feb 16 15:15:53 crc kubenswrapper[4705]: I0216 15:15:53.872832 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:54 crc kubenswrapper[4705]: I0216 15:15:54.287258 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" exitCode=0 Feb 16 15:15:54 crc kubenswrapper[4705]: I0216 15:15:54.287560 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b7cc9557b-77tq2" event={"ID":"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa","Type":"ContainerDied","Data":"5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525"} Feb 16 15:15:54 crc kubenswrapper[4705]: I0216 15:15:54.457962 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="951d407e-26bd-442f-8519-61650a9a3e70" path="/var/lib/kubelet/pods/951d407e-26bd-442f-8519-61650a9a3e70/volumes" Feb 16 15:15:55 crc kubenswrapper[4705]: I0216 15:15:55.298785 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-central-agent" containerID="cri-o://0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537" gracePeriod=30 Feb 16 15:15:55 crc kubenswrapper[4705]: I0216 15:15:55.299181 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-notification-agent" containerID="cri-o://ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a" gracePeriod=30 Feb 16 15:15:55 crc kubenswrapper[4705]: I0216 15:15:55.298894 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="proxy-httpd" containerID="cri-o://fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8" gracePeriod=30 Feb 16 15:15:55 crc kubenswrapper[4705]: I0216 15:15:55.298852 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="sg-core" 
containerID="cri-o://182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b" gracePeriod=30 Feb 16 15:15:56 crc kubenswrapper[4705]: I0216 15:15:56.314037 4705 generic.go:334] "Generic (PLEG): container finished" podID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerID="fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8" exitCode=0 Feb 16 15:15:56 crc kubenswrapper[4705]: I0216 15:15:56.314097 4705 generic.go:334] "Generic (PLEG): container finished" podID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerID="182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b" exitCode=2 Feb 16 15:15:56 crc kubenswrapper[4705]: I0216 15:15:56.314113 4705 generic.go:334] "Generic (PLEG): container finished" podID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerID="ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a" exitCode=0 Feb 16 15:15:56 crc kubenswrapper[4705]: I0216 15:15:56.314132 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerDied","Data":"fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8"} Feb 16 15:15:56 crc kubenswrapper[4705]: I0216 15:15:56.314205 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerDied","Data":"182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b"} Feb 16 15:15:56 crc kubenswrapper[4705]: I0216 15:15:56.314219 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerDied","Data":"ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a"} Feb 16 15:15:57 crc kubenswrapper[4705]: E0216 15:15:57.860067 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525 is running failed: container process not found" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:57 crc kubenswrapper[4705]: E0216 15:15:57.861248 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525 is running failed: container process not found" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:57 crc kubenswrapper[4705]: E0216 15:15:57.861723 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525 is running failed: container process not found" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 16 15:15:57 crc kubenswrapper[4705]: E0216 15:15:57.861761 4705 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-7b7cc9557b-77tq2" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine" Feb 16 15:16:01 crc kubenswrapper[4705]: I0216 15:16:01.695818 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:16:01 crc kubenswrapper[4705]: 
I0216 15:16:01.696442 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:16:01 crc kubenswrapper[4705]: I0216 15:16:01.696647 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:16:01 crc kubenswrapper[4705]: I0216 15:16:01.698725 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99f3f757d43d2fd38017e0cb3e452f132236200b0a90db50ba2e30cfa5620a38"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:16:01 crc kubenswrapper[4705]: I0216 15:16:01.698797 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://99f3f757d43d2fd38017e0cb3e452f132236200b0a90db50ba2e30cfa5620a38" gracePeriod=600 Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.260986 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.326931 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data\") pod \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.327130 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data-custom\") pod \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.327299 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-combined-ca-bundle\") pod \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.327430 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znx6v\" (UniqueName: \"kubernetes.io/projected/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-kube-api-access-znx6v\") pod \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\" (UID: \"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa\") " Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.344586 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" (UID: "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.350123 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-kube-api-access-znx6v" (OuterVolumeSpecName: "kube-api-access-znx6v") pod "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" (UID: "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa"). InnerVolumeSpecName "kube-api-access-znx6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.398510 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" event={"ID":"06284688-bd14-48ff-adf1-d0dc441d1238","Type":"ContainerStarted","Data":"85317c63c64342b640443d7128098cf7e3a161e71ceb14f41123a4cc90d3489a"} Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.422097 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="99f3f757d43d2fd38017e0cb3e452f132236200b0a90db50ba2e30cfa5620a38" exitCode=0 Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.440811 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znx6v\" (UniqueName: \"kubernetes.io/projected/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-kube-api-access-znx6v\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.440862 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.444436 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7b7cc9557b-77tq2" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.468199 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" podStartSLOduration=2.63658934 podStartE2EDuration="20.468169443s" podCreationTimestamp="2026-02-16 15:15:42 +0000 UTC" firstStartedPulling="2026-02-16 15:15:43.647342269 +0000 UTC m=+1337.832319345" lastFinishedPulling="2026-02-16 15:16:01.478922372 +0000 UTC m=+1355.663899448" observedRunningTime="2026-02-16 15:16:02.437327406 +0000 UTC m=+1356.622304482" watchObservedRunningTime="2026-02-16 15:16:02.468169443 +0000 UTC m=+1356.653146519" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.585765 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"99f3f757d43d2fd38017e0cb3e452f132236200b0a90db50ba2e30cfa5620a38"} Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.586112 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b7cc9557b-77tq2" event={"ID":"2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa","Type":"ContainerDied","Data":"00f8e5fe522e813566a78b6896b44d2c17e83898b0bbb39385052b0a457034e8"} Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.586160 4705 scope.go:117] "RemoveContainer" containerID="de600c28f91eecebf3f1afcacfc61ecdebf8796eece435cf86c7979eb622b546" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.590278 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" (UID: "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.626064 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data" (OuterVolumeSpecName: "config-data") pod "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" (UID: "2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.688598 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.688648 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.719577 4705 scope.go:117] "RemoveContainer" containerID="5332ed5d6b46f54ae607c9b70194f5301ec1021a28ba2b17a885590439b98525" Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.796217 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7b7cc9557b-77tq2"] Feb 16 15:16:02 crc kubenswrapper[4705]: I0216 15:16:02.813455 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7b7cc9557b-77tq2"] Feb 16 15:16:03 crc kubenswrapper[4705]: I0216 15:16:03.475660 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29"} Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.436767 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" path="/var/lib/kubelet/pods/2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa/volumes" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.475996 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.518643 4705 generic.go:334] "Generic (PLEG): container finished" podID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerID="0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537" exitCode=0 Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.520576 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.521581 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerDied","Data":"0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537"} Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.521737 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a635e46-1a87-4961-8a11-8c3c7d7adbd1","Type":"ContainerDied","Data":"0b27ef6e89e2cf0aaae157a2376b147fb79e694acf057c4989514c1f299a5941"} Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.521838 4705 scope.go:117] "RemoveContainer" containerID="fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.558914 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2tgr\" (UniqueName: \"kubernetes.io/projected/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-kube-api-access-w2tgr\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.558978 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-run-httpd\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.559018 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-log-httpd\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.559044 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-scripts\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.559138 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-sg-core-conf-yaml\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.559160 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.559918 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.560175 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.568304 4705 scope.go:117] "RemoveContainer" containerID="182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.589390 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-scripts" (OuterVolumeSpecName: "scripts") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.604495 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-kube-api-access-w2tgr" (OuterVolumeSpecName: "kube-api-access-w2tgr") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "kube-api-access-w2tgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.626153 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.661554 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-combined-ca-bundle\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.662989 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2tgr\" (UniqueName: \"kubernetes.io/projected/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-kube-api-access-w2tgr\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.663013 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.663023 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.663033 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.663042 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.722086 4705 scope.go:117] "RemoveContainer" containerID="ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.758551 4705 scope.go:117] 
"RemoveContainer" containerID="0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537" Feb 16 15:16:04 crc kubenswrapper[4705]: E0216 15:16:04.781019 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data podName:3a635e46-1a87-4961-8a11-8c3c7d7adbd1 nodeName:}" failed. No retries permitted until 2026-02-16 15:16:05.280975389 +0000 UTC m=+1359.465952465 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1") : error deleting /var/lib/kubelet/pods/3a635e46-1a87-4961-8a11-8c3c7d7adbd1/volume-subpaths: remove /var/lib/kubelet/pods/3a635e46-1a87-4961-8a11-8c3c7d7adbd1/volume-subpaths: no such file or directory Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.790443 4705 scope.go:117] "RemoveContainer" containerID="fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8" Feb 16 15:16:04 crc kubenswrapper[4705]: E0216 15:16:04.792780 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8\": container with ID starting with fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8 not found: ID does not exist" containerID="fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.792833 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8"} err="failed to get container status \"fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8\": rpc error: code = NotFound desc = could not find container 
\"fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8\": container with ID starting with fedef1065cacd2b0703e682306cdf1ad424d5cc880e71d5c78c4e68cf37884f8 not found: ID does not exist" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.792867 4705 scope.go:117] "RemoveContainer" containerID="182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b" Feb 16 15:16:04 crc kubenswrapper[4705]: E0216 15:16:04.793263 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b\": container with ID starting with 182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b not found: ID does not exist" containerID="182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.793383 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b"} err="failed to get container status \"182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b\": rpc error: code = NotFound desc = could not find container \"182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b\": container with ID starting with 182ca0b355b2196bdd4f2ed45a3f5589b2920f88c96818953d28a748281c3a0b not found: ID does not exist" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.793472 4705 scope.go:117] "RemoveContainer" containerID="ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a" Feb 16 15:16:04 crc kubenswrapper[4705]: E0216 15:16:04.793900 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a\": container with ID starting with ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a not found: ID does not exist" 
containerID="ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.793990 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a"} err="failed to get container status \"ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a\": rpc error: code = NotFound desc = could not find container \"ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a\": container with ID starting with ec5e544c60fc7659dd8af73af091fbc6bd58320958bca949785dc4fd24aab04a not found: ID does not exist" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.794090 4705 scope.go:117] "RemoveContainer" containerID="0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.794460 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:04 crc kubenswrapper[4705]: E0216 15:16:04.797348 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537\": container with ID starting with 0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537 not found: ID does not exist" containerID="0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.797428 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537"} err="failed to get container status \"0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537\": rpc error: code = NotFound desc = could not find container \"0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537\": container with ID starting with 0b536a0a26a4f9512c5f23137a65617aaa16153183b406206fccf4a11d7bc537 not found: ID does not exist" Feb 16 15:16:04 crc kubenswrapper[4705]: I0216 15:16:04.870715 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.284596 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data\") pod \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\" (UID: \"3a635e46-1a87-4961-8a11-8c3c7d7adbd1\") " Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.302754 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data" (OuterVolumeSpecName: "config-data") pod 
"3a635e46-1a87-4961-8a11-8c3c7d7adbd1" (UID: "3a635e46-1a87-4961-8a11-8c3c7d7adbd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.388717 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a635e46-1a87-4961-8a11-8c3c7d7adbd1-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.458892 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.473439 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.521797 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522388 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-notification-agent" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522403 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-notification-agent" Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522422 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" containerName="init" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522428 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" containerName="init" Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522438 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerName="heat-cfnapi" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522444 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerName="heat-cfnapi" Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522462 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522468 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine" Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522483 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-central-agent" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522489 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-central-agent" Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522514 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951d407e-26bd-442f-8519-61650a9a3e70" containerName="heat-api" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522520 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="951d407e-26bd-442f-8519-61650a9a3e70" containerName="heat-api" Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522534 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="proxy-httpd" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522540 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="proxy-httpd" Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522564 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951d407e-26bd-442f-8519-61650a9a3e70" containerName="heat-api" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522569 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="951d407e-26bd-442f-8519-61650a9a3e70" containerName="heat-api" Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522584 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="sg-core" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522591 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="sg-core" Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.522602 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" containerName="dnsmasq-dns" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522610 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" containerName="dnsmasq-dns" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522835 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerName="heat-cfnapi" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522845 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="951d407e-26bd-442f-8519-61650a9a3e70" containerName="heat-api" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522857 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-central-agent" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522863 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="proxy-httpd" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522874 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerName="heat-cfnapi" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522885 4705 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="sg-core" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522893 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" containerName="ceilometer-notification-agent" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522908 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5d3bb0-44ba-4a4a-8e12-e99f0b9bf9aa" containerName="heat-engine" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.522921 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="541411df-f636-4dab-a4e2-2ecc8933f236" containerName="dnsmasq-dns" Feb 16 15:16:05 crc kubenswrapper[4705]: E0216 15:16:05.523191 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerName="heat-cfnapi" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.523203 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b661f8-8d2f-45db-ab8d-cd6436cec8eb" containerName="heat-cfnapi" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.523515 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="951d407e-26bd-442f-8519-61650a9a3e70" containerName="heat-api" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.525984 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.528635 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.528869 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.555578 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.698650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-run-httpd\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.698858 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-config-data\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.698998 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-scripts\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.699234 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " 
pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.699382 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-log-httpd\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.699605 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.699731 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjgd9\" (UniqueName: \"kubernetes.io/projected/8afeb982-5b6c-4224-a38d-ce53a6e37f86-kube-api-access-vjgd9\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802641 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802701 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-log-httpd\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802827 4705 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802883 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjgd9\" (UniqueName: \"kubernetes.io/projected/8afeb982-5b6c-4224-a38d-ce53a6e37f86-kube-api-access-vjgd9\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802926 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-run-httpd\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802946 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-config-data\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.802971 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-scripts\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.803463 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-log-httpd\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc 
kubenswrapper[4705]: I0216 15:16:05.803737 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-run-httpd\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.823462 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.823493 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-config-data\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.824204 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-scripts\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.827399 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.832802 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjgd9\" (UniqueName: \"kubernetes.io/projected/8afeb982-5b6c-4224-a38d-ce53a6e37f86-kube-api-access-vjgd9\") pod \"ceilometer-0\" (UID: 
\"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " pod="openstack/ceilometer-0" Feb 16 15:16:05 crc kubenswrapper[4705]: I0216 15:16:05.846665 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:06 crc kubenswrapper[4705]: I0216 15:16:06.441041 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a635e46-1a87-4961-8a11-8c3c7d7adbd1" path="/var/lib/kubelet/pods/3a635e46-1a87-4961-8a11-8c3c7d7adbd1/volumes" Feb 16 15:16:06 crc kubenswrapper[4705]: I0216 15:16:06.544594 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:06 crc kubenswrapper[4705]: I0216 15:16:06.580634 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerStarted","Data":"71417d528440578012a0700050f75b1c04d4288adeeb4513729b50c1c01939e5"} Feb 16 15:16:07 crc kubenswrapper[4705]: I0216 15:16:07.595306 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerStarted","Data":"a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3"} Feb 16 15:16:08 crc kubenswrapper[4705]: I0216 15:16:08.610650 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerStarted","Data":"aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac"} Feb 16 15:16:09 crc kubenswrapper[4705]: I0216 15:16:09.623962 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerStarted","Data":"89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a"} Feb 16 15:16:11 crc kubenswrapper[4705]: I0216 15:16:11.651054 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerStarted","Data":"a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22"} Feb 16 15:16:11 crc kubenswrapper[4705]: I0216 15:16:11.651628 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:16:11 crc kubenswrapper[4705]: I0216 15:16:11.710070 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.536088173 podStartE2EDuration="6.710042067s" podCreationTimestamp="2026-02-16 15:16:05 +0000 UTC" firstStartedPulling="2026-02-16 15:16:06.572988891 +0000 UTC m=+1360.757965967" lastFinishedPulling="2026-02-16 15:16:10.746942785 +0000 UTC m=+1364.931919861" observedRunningTime="2026-02-16 15:16:11.685929638 +0000 UTC m=+1365.870906714" watchObservedRunningTime="2026-02-16 15:16:11.710042067 +0000 UTC m=+1365.895019143" Feb 16 15:16:16 crc kubenswrapper[4705]: I0216 15:16:16.716113 4705 generic.go:334] "Generic (PLEG): container finished" podID="06284688-bd14-48ff-adf1-d0dc441d1238" containerID="85317c63c64342b640443d7128098cf7e3a161e71ceb14f41123a4cc90d3489a" exitCode=0 Feb 16 15:16:16 crc kubenswrapper[4705]: I0216 15:16:16.716195 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" event={"ID":"06284688-bd14-48ff-adf1-d0dc441d1238","Type":"ContainerDied","Data":"85317c63c64342b640443d7128098cf7e3a161e71ceb14f41123a4cc90d3489a"} Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.177443 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.346518 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-scripts\") pod \"06284688-bd14-48ff-adf1-d0dc441d1238\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.346779 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-config-data\") pod \"06284688-bd14-48ff-adf1-d0dc441d1238\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.346863 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn9xh\" (UniqueName: \"kubernetes.io/projected/06284688-bd14-48ff-adf1-d0dc441d1238-kube-api-access-rn9xh\") pod \"06284688-bd14-48ff-adf1-d0dc441d1238\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.347019 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-combined-ca-bundle\") pod \"06284688-bd14-48ff-adf1-d0dc441d1238\" (UID: \"06284688-bd14-48ff-adf1-d0dc441d1238\") " Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.354301 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-scripts" (OuterVolumeSpecName: "scripts") pod "06284688-bd14-48ff-adf1-d0dc441d1238" (UID: "06284688-bd14-48ff-adf1-d0dc441d1238"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.357501 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06284688-bd14-48ff-adf1-d0dc441d1238-kube-api-access-rn9xh" (OuterVolumeSpecName: "kube-api-access-rn9xh") pod "06284688-bd14-48ff-adf1-d0dc441d1238" (UID: "06284688-bd14-48ff-adf1-d0dc441d1238"). InnerVolumeSpecName "kube-api-access-rn9xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.388320 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-config-data" (OuterVolumeSpecName: "config-data") pod "06284688-bd14-48ff-adf1-d0dc441d1238" (UID: "06284688-bd14-48ff-adf1-d0dc441d1238"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.389203 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06284688-bd14-48ff-adf1-d0dc441d1238" (UID: "06284688-bd14-48ff-adf1-d0dc441d1238"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.449823 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.449859 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn9xh\" (UniqueName: \"kubernetes.io/projected/06284688-bd14-48ff-adf1-d0dc441d1238-kube-api-access-rn9xh\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.449872 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.449882 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06284688-bd14-48ff-adf1-d0dc441d1238-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.741282 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" event={"ID":"06284688-bd14-48ff-adf1-d0dc441d1238","Type":"ContainerDied","Data":"f6b9a0b3fcc55910c8b4dfbb0758f383016eccbe6cd9929f6713ec8d06da6409"} Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.741598 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6b9a0b3fcc55910c8b4dfbb0758f383016eccbe6cd9929f6713ec8d06da6409" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.741685 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-sz8ws" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.897628 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:16:18 crc kubenswrapper[4705]: E0216 15:16:18.898203 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06284688-bd14-48ff-adf1-d0dc441d1238" containerName="nova-cell0-conductor-db-sync" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.898223 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="06284688-bd14-48ff-adf1-d0dc441d1238" containerName="nova-cell0-conductor-db-sync" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.898515 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="06284688-bd14-48ff-adf1-d0dc441d1238" containerName="nova-cell0-conductor-db-sync" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.899446 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.907259 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mq9hp" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.907557 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 15:16:18 crc kubenswrapper[4705]: I0216 15:16:18.923976 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.073057 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: 
I0216 15:16:19.073107 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-729zw\" (UniqueName: \"kubernetes.io/projected/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-kube-api-access-729zw\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.073516 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.176209 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.176577 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.176609 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-729zw\" (UniqueName: \"kubernetes.io/projected/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-kube-api-access-729zw\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.184883 4705 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.185145 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.195793 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-729zw\" (UniqueName: \"kubernetes.io/projected/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-kube-api-access-729zw\") pod \"nova-cell0-conductor-0\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.230281 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:19 crc kubenswrapper[4705]: I0216 15:16:19.767168 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:16:20 crc kubenswrapper[4705]: I0216 15:16:20.769677 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3e47e02d-1f4b-44d5-b6c7-d12353efb4db","Type":"ContainerStarted","Data":"d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299"} Feb 16 15:16:20 crc kubenswrapper[4705]: I0216 15:16:20.770099 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3e47e02d-1f4b-44d5-b6c7-d12353efb4db","Type":"ContainerStarted","Data":"89773dd4ddcc151bb2dd44670cb30683011d6f4c21b57a0cc856f9fb0cb8aa40"} Feb 16 15:16:20 crc kubenswrapper[4705]: I0216 15:16:20.770120 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:20 crc kubenswrapper[4705]: I0216 15:16:20.796631 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:16:20 crc kubenswrapper[4705]: I0216 15:16:20.811098 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.811074681 podStartE2EDuration="2.811074681s" podCreationTimestamp="2026-02-16 15:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:20.798668152 +0000 UTC m=+1374.983645228" watchObservedRunningTime="2026-02-16 15:16:20.811074681 +0000 UTC m=+1374.996051757" Feb 16 15:16:22 crc kubenswrapper[4705]: I0216 15:16:22.797675 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" 
containerName="nova-cell0-conductor-conductor" containerID="cri-o://d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.188330 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.189000 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-central-agent" containerID="cri-o://a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.189153 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="proxy-httpd" containerID="cri-o://a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.189196 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="sg-core" containerID="cri-o://89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.189230 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-notification-agent" containerID="cri-o://aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.228753 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.229:3000/\": EOF" Feb 16 
15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.327116 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.327462 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-log" containerID="cri-o://7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.327619 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-httpd" containerID="cri-o://0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870" gracePeriod=30 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.810897 4705 generic.go:334] "Generic (PLEG): container finished" podID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerID="7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89" exitCode=143 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.810972 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73","Type":"ContainerDied","Data":"7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89"} Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.814832 4705 generic.go:334] "Generic (PLEG): container finished" podID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerID="a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22" exitCode=0 Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.814869 4705 generic.go:334] "Generic (PLEG): container finished" podID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerID="89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a" exitCode=2 Feb 16 15:16:23 crc 
kubenswrapper[4705]: I0216 15:16:23.814894 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerDied","Data":"a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22"} Feb 16 15:16:23 crc kubenswrapper[4705]: I0216 15:16:23.814926 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerDied","Data":"89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a"} Feb 16 15:16:24 crc kubenswrapper[4705]: I0216 15:16:24.853801 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerDied","Data":"a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3"} Feb 16 15:16:24 crc kubenswrapper[4705]: I0216 15:16:24.853681 4705 generic.go:334] "Generic (PLEG): container finished" podID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerID="a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3" exitCode=0 Feb 16 15:16:24 crc kubenswrapper[4705]: I0216 15:16:24.998063 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:16:24 crc kubenswrapper[4705]: I0216 15:16:24.998456 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-log" containerID="cri-o://dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9" gracePeriod=30 Feb 16 15:16:24 crc kubenswrapper[4705]: I0216 15:16:24.998593 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-httpd" containerID="cri-o://365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643" 
gracePeriod=30 Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.859129 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.900626 4705 generic.go:334] "Generic (PLEG): container finished" podID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerID="aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac" exitCode=0 Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.900925 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerDied","Data":"aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac"} Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.903877 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8afeb982-5b6c-4224-a38d-ce53a6e37f86","Type":"ContainerDied","Data":"71417d528440578012a0700050f75b1c04d4288adeeb4513729b50c1c01939e5"} Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.903964 4705 scope.go:117] "RemoveContainer" containerID="a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.904965 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.910201 4705 generic.go:334] "Generic (PLEG): container finished" podID="2678da20-6fd3-430b-8841-40842382c4fb" containerID="dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9" exitCode=143 Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.910283 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2678da20-6fd3-430b-8841-40842382c4fb","Type":"ContainerDied","Data":"dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9"} Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.914523 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-sg-core-conf-yaml\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.914589 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-scripts\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.916698 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-combined-ca-bundle\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.916791 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-run-httpd\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: 
\"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.916835 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-log-httpd\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.916883 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjgd9\" (UniqueName: \"kubernetes.io/projected/8afeb982-5b6c-4224-a38d-ce53a6e37f86-kube-api-access-vjgd9\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.917023 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-config-data\") pod \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\" (UID: \"8afeb982-5b6c-4224-a38d-ce53a6e37f86\") " Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.918552 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.918916 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.922777 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-scripts" (OuterVolumeSpecName: "scripts") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.929604 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.929660 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.929676 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8afeb982-5b6c-4224-a38d-ce53a6e37f86-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.953259 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8afeb982-5b6c-4224-a38d-ce53a6e37f86-kube-api-access-vjgd9" (OuterVolumeSpecName: "kube-api-access-vjgd9") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "kube-api-access-vjgd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:25 crc kubenswrapper[4705]: I0216 15:16:25.968663 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.013637 4705 scope.go:117] "RemoveContainer" containerID="89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.037230 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.037271 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjgd9\" (UniqueName: \"kubernetes.io/projected/8afeb982-5b6c-4224-a38d-ce53a6e37f86-kube-api-access-vjgd9\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.042997 4705 scope.go:117] "RemoveContainer" containerID="aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.092396 4705 scope.go:117] "RemoveContainer" containerID="a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.092330 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.125233 4705 scope.go:117] "RemoveContainer" containerID="a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.127501 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22\": container with ID starting with a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22 not found: ID does not exist" containerID="a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.127552 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22"} err="failed to get container status \"a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22\": rpc error: code = NotFound desc = could not find container \"a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22\": container with ID starting with a5af26a87fb05ef68d7e96a6da26e9a16f4ab484ed4a8f00aa468d86819d9d22 not found: ID does not exist" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.127603 4705 scope.go:117] "RemoveContainer" containerID="89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.128296 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a\": container with ID starting with 89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a not found: ID does not exist" containerID="89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.128379 
4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a"} err="failed to get container status \"89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a\": rpc error: code = NotFound desc = could not find container \"89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a\": container with ID starting with 89aaaa44546bc85e9086471f0d53ca8703c813b9b783a80cf4f01cb652bf561a not found: ID does not exist" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.128425 4705 scope.go:117] "RemoveContainer" containerID="aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.128847 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac\": container with ID starting with aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac not found: ID does not exist" containerID="aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.128870 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac"} err="failed to get container status \"aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac\": rpc error: code = NotFound desc = could not find container \"aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac\": container with ID starting with aa3c9a9d7d637332a2e61ebb4a4ef4ceed3d691d732027e21c8c2f8478cb0eac not found: ID does not exist" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.128889 4705 scope.go:117] "RemoveContainer" containerID="a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 
15:16:26.129133 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3\": container with ID starting with a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3 not found: ID does not exist" containerID="a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.129155 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3"} err="failed to get container status \"a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3\": rpc error: code = NotFound desc = could not find container \"a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3\": container with ID starting with a172f17c2fd23c37446a2898c251a2db2b2c9233c7c8eeef98a42b561fc676b3 not found: ID does not exist" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.135175 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-config-data" (OuterVolumeSpecName: "config-data") pod "8afeb982-5b6c-4224-a38d-ce53a6e37f86" (UID: "8afeb982-5b6c-4224-a38d-ce53a6e37f86"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.139513 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.139544 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afeb982-5b6c-4224-a38d-ce53a6e37f86-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.264314 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.277552 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.300717 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.301283 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-notification-agent" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301302 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-notification-agent" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.301332 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="sg-core" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301342 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="sg-core" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.301383 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-central-agent" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301390 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-central-agent" Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.301406 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="proxy-httpd" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301412 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="proxy-httpd" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301659 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="proxy-httpd" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301687 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-notification-agent" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301706 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="ceilometer-central-agent" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.301716 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" containerName="sg-core" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.303954 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.306575 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.306577 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.323992 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346472 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346541 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346579 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-log-httpd\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346715 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-run-httpd\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " 
pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346779 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-scripts\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346824 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-config-data\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.346864 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdxlf\" (UniqueName: \"kubernetes.io/projected/72eeae29-5189-4fbd-936f-62c4bbe94388-kube-api-access-gdxlf\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.449932 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.451767 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.451862 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-log-httpd\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.452157 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-run-httpd\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.452277 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-scripts\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.452350 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-config-data\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.452453 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdxlf\" (UniqueName: \"kubernetes.io/projected/72eeae29-5189-4fbd-936f-62c4bbe94388-kube-api-access-gdxlf\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.454709 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-run-httpd\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: 
I0216 15:16:26.456418 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8afeb982-5b6c-4224-a38d-ce53a6e37f86" path="/var/lib/kubelet/pods/8afeb982-5b6c-4224-a38d-ce53a6e37f86/volumes" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.457320 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-log-httpd\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.459113 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:26 crc kubenswrapper[4705]: E0216 15:16:26.461000 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-gdxlf scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="72eeae29-5189-4fbd-936f-62c4bbe94388" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.476333 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.477042 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.479207 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.480322 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdxlf\" (UniqueName: 
\"kubernetes.io/projected/72eeae29-5189-4fbd-936f-62c4bbe94388-kube-api-access-gdxlf\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.488504 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-config-data\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.488773 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-scripts\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.491118 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.922330 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:26 crc kubenswrapper[4705]: I0216 15:16:26.933073 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.067588 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-scripts\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.068061 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-combined-ca-bundle\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.068168 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-run-httpd\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.068327 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdxlf\" (UniqueName: \"kubernetes.io/projected/72eeae29-5189-4fbd-936f-62c4bbe94388-kube-api-access-gdxlf\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.068392 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-sg-core-conf-yaml\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.068502 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-config-data\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.068569 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-log-httpd\") pod \"72eeae29-5189-4fbd-936f-62c4bbe94388\" (UID: \"72eeae29-5189-4fbd-936f-62c4bbe94388\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.073186 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.075468 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.075840 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.077124 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72eeae29-5189-4fbd-936f-62c4bbe94388-kube-api-access-gdxlf" (OuterVolumeSpecName: "kube-api-access-gdxlf") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "kube-api-access-gdxlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.077221 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-config-data" (OuterVolumeSpecName: "config-data") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.078722 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.085751 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-scripts" (OuterVolumeSpecName: "scripts") pod "72eeae29-5189-4fbd-936f-62c4bbe94388" (UID: "72eeae29-5189-4fbd-936f-62c4bbe94388"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172125 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172178 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdxlf\" (UniqueName: \"kubernetes.io/projected/72eeae29-5189-4fbd-936f-62c4bbe94388-kube-api-access-gdxlf\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172189 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172200 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172210 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72eeae29-5189-4fbd-936f-62c4bbe94388-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172218 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.172226 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72eeae29-5189-4fbd-936f-62c4bbe94388-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.630129 4705 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.787081 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-logs\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.788418 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-scripts\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.789095 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-httpd-run\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.789249 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-combined-ca-bundle\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.789361 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j75f5\" (UniqueName: \"kubernetes.io/projected/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-kube-api-access-j75f5\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.789462 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-public-tls-certs\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.788322 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-logs" (OuterVolumeSpecName: "logs") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.790142 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.790600 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.790838 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-config-data\") pod \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\" (UID: \"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73\") " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.791663 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-logs\") on node \"crc\" DevicePath 
\"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.791735 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.794297 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-scripts" (OuterVolumeSpecName: "scripts") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.796185 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-kube-api-access-j75f5" (OuterVolumeSpecName: "kube-api-access-j75f5") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "kube-api-access-j75f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.852583 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.865280 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e" (OuterVolumeSpecName: "glance") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). 
InnerVolumeSpecName "pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.879841 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.883809 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-config-data" (OuterVolumeSpecName: "config-data") pod "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" (UID: "c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.894211 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.894439 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.894504 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j75f5\" (UniqueName: \"kubernetes.io/projected/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-kube-api-access-j75f5\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.894559 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.894643 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") on node \"crc\" " Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.894704 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.938814 4705 generic.go:334] "Generic (PLEG): container finished" podID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerID="0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870" exitCode=0 Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.940311 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.939582 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73","Type":"ContainerDied","Data":"0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870"} Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.940590 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73","Type":"ContainerDied","Data":"c5ba3670e10f24ad54fc42896f6a8dbf5c3da085d24f0211e76f0de678b672ee"} Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.940674 4705 scope.go:117] "RemoveContainer" containerID="0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.939313 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.941115 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.941417 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e") on node "crc" Feb 16 15:16:27 crc kubenswrapper[4705]: I0216 15:16:27.985622 4705 scope.go:117] "RemoveContainer" containerID="7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.000817 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.014957 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.039801 4705 scope.go:117] "RemoveContainer" containerID="0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870" Feb 16 15:16:28 crc kubenswrapper[4705]: E0216 15:16:28.042543 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870\": container with ID starting with 0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870 not found: ID does not exist" containerID="0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.042649 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870"} err="failed to get container status \"0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870\": rpc error: code = NotFound desc = could not find container 
\"0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870\": container with ID starting with 0cd283d8f413f7f45e85a699f70d6eef33f9c3f0f38a01a3de0425fd7f0a1870 not found: ID does not exist" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.042696 4705 scope.go:117] "RemoveContainer" containerID="7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89" Feb 16 15:16:28 crc kubenswrapper[4705]: E0216 15:16:28.043562 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89\": container with ID starting with 7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89 not found: ID does not exist" containerID="7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.043650 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89"} err="failed to get container status \"7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89\": rpc error: code = NotFound desc = could not find container \"7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89\": container with ID starting with 7cd8df74d15f810f453fd28403453c63c9874dc17f512b273c6d79e6ae274a89 not found: ID does not exist" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.091838 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.155462 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.189151 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.204618 4705 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: E0216 15:16:28.205590 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-log" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.205637 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-log" Feb 16 15:16:28 crc kubenswrapper[4705]: E0216 15:16:28.205671 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-httpd" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.205677 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-httpd" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.206025 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-httpd" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.206068 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" containerName="glance-log" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.227218 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.233437 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.233757 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.259820 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.292322 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.294953 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.297483 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.299925 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 15:16:28 crc kubenswrapper[4705]: E0216 15:16:28.312060 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72eeae29_5189_4fbd_936f_62c4bbe94388.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2678da20_6fd3_430b_8841_40842382c4fb.slice/crio-365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8f8a7c2_28a1_45b0_ac6a_9b6f33ac1a73.slice/crio-c5ba3670e10f24ad54fc42896f6a8dbf5c3da085d24f0211e76f0de678b672ee\": 
RecentStats: unable to find data in memory cache]" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.325601 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.327232 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.327312 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.327392 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-log-httpd\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.327437 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-scripts\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.327921 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7lqc\" (UniqueName: \"kubernetes.io/projected/ad341212-f2ac-4c6d-81cd-1113a9a524b2-kube-api-access-z7lqc\") pod 
\"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.328141 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-run-httpd\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.328550 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-config-data\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431152 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-config-data\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431215 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431238 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " 
pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431290 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431306 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431338 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431358 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-logs\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431413 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-log-httpd\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431441 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-scripts\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431484 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84kw6\" (UniqueName: \"kubernetes.io/projected/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-kube-api-access-84kw6\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431546 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7lqc\" (UniqueName: \"kubernetes.io/projected/ad341212-f2ac-4c6d-81cd-1113a9a524b2-kube-api-access-z7lqc\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431589 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431612 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-run-httpd\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431633 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.431658 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.435893 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-log-httpd\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.436582 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-run-httpd\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.442110 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.442649 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-scripts\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc 
kubenswrapper[4705]: I0216 15:16:28.447181 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72eeae29-5189-4fbd-936f-62c4bbe94388" path="/var/lib/kubelet/pods/72eeae29-5189-4fbd-936f-62c4bbe94388/volumes" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.448450 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73" path="/var/lib/kubelet/pods/c8f8a7c2-28a1-45b0-ac6a-9b6f33ac1a73/volumes" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.454549 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7lqc\" (UniqueName: \"kubernetes.io/projected/ad341212-f2ac-4c6d-81cd-1113a9a524b2-kube-api-access-z7lqc\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.461246 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-config-data\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.473853 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.535544 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536232 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536254 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536342 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536436 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-logs\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536528 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84kw6\" (UniqueName: \"kubernetes.io/projected/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-kube-api-access-84kw6\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536916 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536958 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.541106 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-logs\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.545910 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.536039 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.548750 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.551183 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.551565 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.577448 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.577499 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/69f9a0afde09cde3194ac3fcfa9df7bd80860335646625dfa8f7f213d22f9d05/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.578004 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84kw6\" (UniqueName: \"kubernetes.io/projected/2ef0b445-ec9e-4c58-a7d3-59068664d3ca-kube-api-access-84kw6\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.641214 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.683952 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a50bf1b-9ab8-494b-b754-827afdf8b94e\") pod \"glance-default-external-api-0\" (UID: \"2ef0b445-ec9e-4c58-a7d3-59068664d3ca\") " pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.753186 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.846609 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-scripts\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.847016 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-internal-tls-certs\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.848403 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.848588 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-config-data\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.848638 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-logs\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.848662 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-httpd-run\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.848683 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-combined-ca-bundle\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.848715 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6jf6\" (UniqueName: \"kubernetes.io/projected/2678da20-6fd3-430b-8841-40842382c4fb-kube-api-access-v6jf6\") pod \"2678da20-6fd3-430b-8841-40842382c4fb\" (UID: \"2678da20-6fd3-430b-8841-40842382c4fb\") " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.849748 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-logs" (OuterVolumeSpecName: "logs") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.851046 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.852885 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.852914 4705 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2678da20-6fd3-430b-8841-40842382c4fb-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.865550 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-scripts" (OuterVolumeSpecName: "scripts") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.873355 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2678da20-6fd3-430b-8841-40842382c4fb-kube-api-access-v6jf6" (OuterVolumeSpecName: "kube-api-access-v6jf6") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "kube-api-access-v6jf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.900888 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157" (OuterVolumeSpecName: "glance") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.919484 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.950207 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.956229 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.956271 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6jf6\" (UniqueName: \"kubernetes.io/projected/2678da20-6fd3-430b-8841-40842382c4fb-kube-api-access-v6jf6\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.956286 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.956329 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") on node \"crc\" " Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.957104 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-config-data" (OuterVolumeSpecName: "config-data") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.969751 4705 generic.go:334] "Generic (PLEG): container finished" podID="2678da20-6fd3-430b-8841-40842382c4fb" containerID="365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643" exitCode=0 Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.970030 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2678da20-6fd3-430b-8841-40842382c4fb","Type":"ContainerDied","Data":"365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643"} Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.970121 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2678da20-6fd3-430b-8841-40842382c4fb","Type":"ContainerDied","Data":"15fe536f2d1e7276c5b6aa9bd3efbc8aff43c887dcf49127f48384d48325f958"} Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.970204 4705 scope.go:117] "RemoveContainer" containerID="365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643" Feb 16 15:16:28 crc kubenswrapper[4705]: I0216 15:16:28.970497 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.021248 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2678da20-6fd3-430b-8841-40842382c4fb" (UID: "2678da20-6fd3-430b-8841-40842382c4fb"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.035305 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.035809 4705 scope.go:117] "RemoveContainer" containerID="dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.041021 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157") on node "crc" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.059259 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.059304 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.059317 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2678da20-6fd3-430b-8841-40842382c4fb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.078268 4705 scope.go:117] "RemoveContainer" containerID="365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643" Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.079827 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643\": container with ID starting with 365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643 not found: ID does not exist" containerID="365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.079880 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643"} err="failed to get container status \"365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643\": rpc error: code = NotFound desc = could not find container \"365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643\": container with ID starting with 365a7f5b865d99c4425aa811713fad51d4cced774a46e07ccc602eb38c65f643 not found: ID does not exist" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.079907 4705 scope.go:117] "RemoveContainer" containerID="dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9" Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.085384 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9\": container with ID starting with dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9 not found: ID does not exist" containerID="dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.085429 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9"} err="failed to get container status \"dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9\": rpc error: code = NotFound desc = could not find container \"dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9\": container with ID 
starting with dfc4ed7d70627c9ac52c8b07bd64f52ccdf30986d5c9e2ddf43be218a16c28a9 not found: ID does not exist" Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.235800 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.238319 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.240298 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.240342 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.330461 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.354842 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.371868 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.389572 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.390269 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-httpd" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.390290 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-httpd" Feb 16 15:16:29 crc kubenswrapper[4705]: E0216 15:16:29.390328 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-log" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.390335 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-log" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.390601 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-httpd" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.390651 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2678da20-6fd3-430b-8841-40842382c4fb" containerName="glance-log" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.392221 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.396508 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.397204 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.403007 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.482780 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm7db\" (UniqueName: \"kubernetes.io/projected/28ba576c-ee01-48ea-b78b-a2bea81b90a2-kube-api-access-cm7db\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483070 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28ba576c-ee01-48ea-b78b-a2bea81b90a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483260 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483422 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/28ba576c-ee01-48ea-b78b-a2bea81b90a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483562 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483594 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483692 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.483720 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.586986 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587047 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587158 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm7db\" (UniqueName: \"kubernetes.io/projected/28ba576c-ee01-48ea-b78b-a2bea81b90a2-kube-api-access-cm7db\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587218 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28ba576c-ee01-48ea-b78b-a2bea81b90a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587264 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587338 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28ba576c-ee01-48ea-b78b-a2bea81b90a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587460 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.587489 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.588263 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28ba576c-ee01-48ea-b78b-a2bea81b90a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.588741 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28ba576c-ee01-48ea-b78b-a2bea81b90a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.594740 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.597315 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.598052 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.604194 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.604243 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5d44f58a274729942503542a04ea080ac58862a31aa07a9ece94d5eb6543b70/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.621034 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ba576c-ee01-48ea-b78b-a2bea81b90a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.626173 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm7db\" (UniqueName: \"kubernetes.io/projected/28ba576c-ee01-48ea-b78b-a2bea81b90a2-kube-api-access-cm7db\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:29 crc kubenswrapper[4705]: I0216 15:16:29.973964 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:16:30 crc kubenswrapper[4705]: I0216 15:16:30.001581 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerStarted","Data":"1d1c90bbe89df2444f211fbae43512bd74e7492f1d6052bd985eae43052c1133"} Feb 16 15:16:30 crc kubenswrapper[4705]: I0216 15:16:30.004841 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e063cad-c4e6-4b97-bc1d-63a7f02f1157\") pod \"glance-default-internal-api-0\" (UID: \"28ba576c-ee01-48ea-b78b-a2bea81b90a2\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:16:30 crc kubenswrapper[4705]: I0216 15:16:30.005188 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ef0b445-ec9e-4c58-a7d3-59068664d3ca","Type":"ContainerStarted","Data":"42409d76c6328a1e20bf61bf099083b597e7541e6ba3851697295b44a1a71728"} Feb 16 15:16:30 crc kubenswrapper[4705]: I0216 15:16:30.053551 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:30 crc kubenswrapper[4705]: I0216 15:16:30.440475 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2678da20-6fd3-430b-8841-40842382c4fb" path="/var/lib/kubelet/pods/2678da20-6fd3-430b-8841-40842382c4fb/volumes" Feb 16 15:16:30 crc kubenswrapper[4705]: W0216 15:16:30.800544 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ba576c_ee01_48ea_b78b_a2bea81b90a2.slice/crio-b94c3fcca6af45e8c10f9af8f5f71a8234e67c0beb8c712cb04863449935e444 WatchSource:0}: Error finding container b94c3fcca6af45e8c10f9af8f5f71a8234e67c0beb8c712cb04863449935e444: Status 404 returned error can't find the container with id b94c3fcca6af45e8c10f9af8f5f71a8234e67c0beb8c712cb04863449935e444 Feb 16 15:16:30 crc kubenswrapper[4705]: I0216 15:16:30.801158 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:16:31 crc kubenswrapper[4705]: I0216 15:16:31.037503 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerStarted","Data":"c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14"} Feb 16 15:16:31 crc kubenswrapper[4705]: I0216 15:16:31.050031 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28ba576c-ee01-48ea-b78b-a2bea81b90a2","Type":"ContainerStarted","Data":"b94c3fcca6af45e8c10f9af8f5f71a8234e67c0beb8c712cb04863449935e444"} Feb 16 15:16:31 crc kubenswrapper[4705]: I0216 15:16:31.061564 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ef0b445-ec9e-4c58-a7d3-59068664d3ca","Type":"ContainerStarted","Data":"fc15a68c46e4a01f1bfd32ecf47726ae0ce0940adb334ef0150d24181c9ce669"} Feb 16 15:16:32 crc kubenswrapper[4705]: I0216 15:16:32.084605 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28ba576c-ee01-48ea-b78b-a2bea81b90a2","Type":"ContainerStarted","Data":"7233b5edcef46f692b6133525276a43ea82217316fe4a9039c193bc50033373b"} Feb 16 15:16:32 crc kubenswrapper[4705]: I0216 15:16:32.092495 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ef0b445-ec9e-4c58-a7d3-59068664d3ca","Type":"ContainerStarted","Data":"ba000d59d264edaf8176f1a2f76b35d2d7f5a1361b20eec29741f809ac8aed78"} Feb 16 15:16:32 crc kubenswrapper[4705]: I0216 15:16:32.107153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerStarted","Data":"ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466"} Feb 16 15:16:32 crc kubenswrapper[4705]: I0216 15:16:32.134714 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.134686319 podStartE2EDuration="4.134686319s" podCreationTimestamp="2026-02-16 
15:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:32.123974788 +0000 UTC m=+1386.308951874" watchObservedRunningTime="2026-02-16 15:16:32.134686319 +0000 UTC m=+1386.319663395" Feb 16 15:16:33 crc kubenswrapper[4705]: I0216 15:16:33.128913 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28ba576c-ee01-48ea-b78b-a2bea81b90a2","Type":"ContainerStarted","Data":"2e53c8695ea3efda2c26ae056ef8a355b94fc82df9cb941815930162fec0b6de"} Feb 16 15:16:33 crc kubenswrapper[4705]: I0216 15:16:33.137209 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerStarted","Data":"f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094"} Feb 16 15:16:33 crc kubenswrapper[4705]: I0216 15:16:33.816704 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.8166777530000005 podStartE2EDuration="4.816677753s" podCreationTimestamp="2026-02-16 15:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:33.173814933 +0000 UTC m=+1387.358792009" watchObservedRunningTime="2026-02-16 15:16:33.816677753 +0000 UTC m=+1388.001654829" Feb 16 15:16:33 crc kubenswrapper[4705]: I0216 15:16:33.827978 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:34 crc kubenswrapper[4705]: I0216 15:16:34.151835 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerStarted","Data":"b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520"} Feb 16 15:16:34 crc kubenswrapper[4705]: I0216 15:16:34.151905 4705 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:16:34 crc kubenswrapper[4705]: I0216 15:16:34.187361 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.216078701 podStartE2EDuration="6.187338655s" podCreationTimestamp="2026-02-16 15:16:28 +0000 UTC" firstStartedPulling="2026-02-16 15:16:29.347392089 +0000 UTC m=+1383.532369165" lastFinishedPulling="2026-02-16 15:16:33.318652043 +0000 UTC m=+1387.503629119" observedRunningTime="2026-02-16 15:16:34.174771641 +0000 UTC m=+1388.359748757" watchObservedRunningTime="2026-02-16 15:16:34.187338655 +0000 UTC m=+1388.372315721" Feb 16 15:16:34 crc kubenswrapper[4705]: E0216 15:16:34.233178 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:34 crc kubenswrapper[4705]: E0216 15:16:34.234552 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:34 crc kubenswrapper[4705]: E0216 15:16:34.236559 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:34 crc kubenswrapper[4705]: E0216 15:16:34.236614 4705 prober.go:104] "Probe errored" err="rpc error: code 
= Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor" Feb 16 15:16:35 crc kubenswrapper[4705]: I0216 15:16:35.159774 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-central-agent" containerID="cri-o://c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14" gracePeriod=30 Feb 16 15:16:35 crc kubenswrapper[4705]: I0216 15:16:35.159861 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="sg-core" containerID="cri-o://f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094" gracePeriod=30 Feb 16 15:16:35 crc kubenswrapper[4705]: I0216 15:16:35.159861 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="proxy-httpd" containerID="cri-o://b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520" gracePeriod=30 Feb 16 15:16:35 crc kubenswrapper[4705]: I0216 15:16:35.159861 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-notification-agent" containerID="cri-o://ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466" gracePeriod=30 Feb 16 15:16:36 crc kubenswrapper[4705]: I0216 15:16:36.174355 4705 generic.go:334] "Generic (PLEG): container finished" podID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerID="b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520" exitCode=0 Feb 16 15:16:36 crc kubenswrapper[4705]: I0216 15:16:36.174910 4705 
generic.go:334] "Generic (PLEG): container finished" podID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerID="f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094" exitCode=2 Feb 16 15:16:36 crc kubenswrapper[4705]: I0216 15:16:36.174926 4705 generic.go:334] "Generic (PLEG): container finished" podID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerID="ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466" exitCode=0 Feb 16 15:16:36 crc kubenswrapper[4705]: I0216 15:16:36.174422 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerDied","Data":"b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520"} Feb 16 15:16:36 crc kubenswrapper[4705]: I0216 15:16:36.174988 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerDied","Data":"f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094"} Feb 16 15:16:36 crc kubenswrapper[4705]: I0216 15:16:36.175011 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerDied","Data":"ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466"} Feb 16 15:16:38 crc kubenswrapper[4705]: I0216 15:16:38.943492 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-sz982"] Feb 16 15:16:38 crc kubenswrapper[4705]: I0216 15:16:38.946266 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-sz982" Feb 16 15:16:38 crc kubenswrapper[4705]: I0216 15:16:38.952154 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 15:16:38 crc kubenswrapper[4705]: I0216 15:16:38.952220 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 15:16:38 crc kubenswrapper[4705]: I0216 15:16:38.980000 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-sz982"] Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.076510 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.082626 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.105904 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfzk6\" (UniqueName: \"kubernetes.io/projected/481dd88a-36b9-432c-9d21-9221f5e98e6e-kube-api-access-vfzk6\") pod \"aodh-db-create-sz982\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " pod="openstack/aodh-db-create-sz982" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.106019 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481dd88a-36b9-432c-9d21-9221f5e98e6e-operator-scripts\") pod \"aodh-db-create-sz982\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " pod="openstack/aodh-db-create-sz982" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.214158 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-1473-account-create-update-mpxtv"] Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.221515 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfzk6\" (UniqueName: \"kubernetes.io/projected/481dd88a-36b9-432c-9d21-9221f5e98e6e-kube-api-access-vfzk6\") pod \"aodh-db-create-sz982\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " pod="openstack/aodh-db-create-sz982" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.221610 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481dd88a-36b9-432c-9d21-9221f5e98e6e-operator-scripts\") pod \"aodh-db-create-sz982\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " pod="openstack/aodh-db-create-sz982" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.225197 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481dd88a-36b9-432c-9d21-9221f5e98e6e-operator-scripts\") pod \"aodh-db-create-sz982\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " pod="openstack/aodh-db-create-sz982" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.234761 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: E0216 15:16:39.234904 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.244593 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.244633 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.245734 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1473-account-create-update-mpxtv"] Feb 16 15:16:39 crc kubenswrapper[4705]: E0216 15:16:39.246612 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.247641 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 16 15:16:39 crc kubenswrapper[4705]: E0216 15:16:39.251426 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:39 crc kubenswrapper[4705]: E0216 
15:16:39.251473 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.277445 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfzk6\" (UniqueName: \"kubernetes.io/projected/481dd88a-36b9-432c-9d21-9221f5e98e6e-kube-api-access-vfzk6\") pod \"aodh-db-create-sz982\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " pod="openstack/aodh-db-create-sz982" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.281530 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-sz982" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.324735 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-operator-scripts\") pod \"aodh-1473-account-create-update-mpxtv\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.325226 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d9jn\" (UniqueName: \"kubernetes.io/projected/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-kube-api-access-6d9jn\") pod \"aodh-1473-account-create-update-mpxtv\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.431745 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-operator-scripts\") pod \"aodh-1473-account-create-update-mpxtv\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.432121 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d9jn\" (UniqueName: \"kubernetes.io/projected/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-kube-api-access-6d9jn\") pod \"aodh-1473-account-create-update-mpxtv\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.433290 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-operator-scripts\") pod \"aodh-1473-account-create-update-mpxtv\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.465669 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d9jn\" (UniqueName: \"kubernetes.io/projected/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-kube-api-access-6d9jn\") pod \"aodh-1473-account-create-update-mpxtv\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.580423 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:39 crc kubenswrapper[4705]: I0216 15:16:39.914203 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-sz982"] Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.055301 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.055436 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.125047 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.158867 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.261211 4705 generic.go:334] "Generic (PLEG): container finished" podID="3a49bd2f-26b0-4969-86db-cd980251a202" containerID="6ca9b1a8d277b8ac8e146f701cfca1d79427d28cc9235476ddd2bf5977afbd60" exitCode=137 Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.262549 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-656d9cf494-c6m8t" event={"ID":"3a49bd2f-26b0-4969-86db-cd980251a202","Type":"ContainerDied","Data":"6ca9b1a8d277b8ac8e146f701cfca1d79427d28cc9235476ddd2bf5977afbd60"} Feb 16 15:16:40 crc kubenswrapper[4705]: W0216 15:16:40.268899 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod885bde30_8f11_4a3f_b1ed_db26e4aa4ab2.slice/crio-27d187140adb7c9667517d18bd334790d7f3f52282fe668dee19c30ae71b795d WatchSource:0}: Error finding container 27d187140adb7c9667517d18bd334790d7f3f52282fe668dee19c30ae71b795d: Status 404 returned error can't find the 
container with id 27d187140adb7c9667517d18bd334790d7f3f52282fe668dee19c30ae71b795d Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.276396 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-sz982" event={"ID":"481dd88a-36b9-432c-9d21-9221f5e98e6e","Type":"ContainerStarted","Data":"c4e41dff555ca49ad18fee2a483f8d8d621a7c447a6cc4eeeab8d6ada480a2b5"} Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.276591 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.277067 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-sz982" event={"ID":"481dd88a-36b9-432c-9d21-9221f5e98e6e","Type":"ContainerStarted","Data":"9cfce51868e78850a9a6331e47086bb0f35c3889bc3ab1b4af9675f440589e77"} Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.277959 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.299179 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1473-account-create-update-mpxtv"] Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.320498 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-sz982" podStartSLOduration=2.32046625 podStartE2EDuration="2.32046625s" podCreationTimestamp="2026-02-16 15:16:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:40.297606836 +0000 UTC m=+1394.482583912" watchObservedRunningTime="2026-02-16 15:16:40.32046625 +0000 UTC m=+1394.505443326" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.560428 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.674290 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data\") pod \"3a49bd2f-26b0-4969-86db-cd980251a202\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.674411 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data-custom\") pod \"3a49bd2f-26b0-4969-86db-cd980251a202\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.674514 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-combined-ca-bundle\") pod \"3a49bd2f-26b0-4969-86db-cd980251a202\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.674922 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8dr8\" (UniqueName: \"kubernetes.io/projected/3a49bd2f-26b0-4969-86db-cd980251a202-kube-api-access-m8dr8\") pod \"3a49bd2f-26b0-4969-86db-cd980251a202\" (UID: \"3a49bd2f-26b0-4969-86db-cd980251a202\") " Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.692758 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3a49bd2f-26b0-4969-86db-cd980251a202" (UID: "3a49bd2f-26b0-4969-86db-cd980251a202"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.692837 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a49bd2f-26b0-4969-86db-cd980251a202-kube-api-access-m8dr8" (OuterVolumeSpecName: "kube-api-access-m8dr8") pod "3a49bd2f-26b0-4969-86db-cd980251a202" (UID: "3a49bd2f-26b0-4969-86db-cd980251a202"). InnerVolumeSpecName "kube-api-access-m8dr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.760003 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a49bd2f-26b0-4969-86db-cd980251a202" (UID: "3a49bd2f-26b0-4969-86db-cd980251a202"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.778400 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8dr8\" (UniqueName: \"kubernetes.io/projected/3a49bd2f-26b0-4969-86db-cd980251a202-kube-api-access-m8dr8\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.778441 4705 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.778452 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.784543 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data" (OuterVolumeSpecName: "config-data") pod "3a49bd2f-26b0-4969-86db-cd980251a202" (UID: "3a49bd2f-26b0-4969-86db-cd980251a202"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:40 crc kubenswrapper[4705]: I0216 15:16:40.881070 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a49bd2f-26b0-4969-86db-cd980251a202-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.224244 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.290699 4705 generic.go:334] "Generic (PLEG): container finished" podID="481dd88a-36b9-432c-9d21-9221f5e98e6e" containerID="c4e41dff555ca49ad18fee2a483f8d8d621a7c447a6cc4eeeab8d6ada480a2b5" exitCode=0 Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.290775 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-sz982" event={"ID":"481dd88a-36b9-432c-9d21-9221f5e98e6e","Type":"ContainerDied","Data":"c4e41dff555ca49ad18fee2a483f8d8d621a7c447a6cc4eeeab8d6ada480a2b5"} Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.293036 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-scripts\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.293114 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-config-data\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc 
kubenswrapper[4705]: I0216 15:16:41.293486 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-log-httpd\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.293519 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7lqc\" (UniqueName: \"kubernetes.io/projected/ad341212-f2ac-4c6d-81cd-1113a9a524b2-kube-api-access-z7lqc\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.293558 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-run-httpd\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.293665 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-combined-ca-bundle\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.293878 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-sg-core-conf-yaml\") pod \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\" (UID: \"ad341212-f2ac-4c6d-81cd-1113a9a524b2\") " Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.298164 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-run-httpd" (OuterVolumeSpecName: "run-httpd") 
pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.299736 4705 generic.go:334] "Generic (PLEG): container finished" podID="885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" containerID="550b8aa10a670058b9e6ac10f7f37313d7d31e0cbd688f1364fdc7c57db609af" exitCode=0 Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.300127 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1473-account-create-update-mpxtv" event={"ID":"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2","Type":"ContainerDied","Data":"550b8aa10a670058b9e6ac10f7f37313d7d31e0cbd688f1364fdc7c57db609af"} Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.300170 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1473-account-create-update-mpxtv" event={"ID":"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2","Type":"ContainerStarted","Data":"27d187140adb7c9667517d18bd334790d7f3f52282fe668dee19c30ae71b795d"} Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.300224 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.306821 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad341212-f2ac-4c6d-81cd-1113a9a524b2-kube-api-access-z7lqc" (OuterVolumeSpecName: "kube-api-access-z7lqc") pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "kube-api-access-z7lqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.306909 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-scripts" (OuterVolumeSpecName: "scripts") pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.316476 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-656d9cf494-c6m8t" event={"ID":"3a49bd2f-26b0-4969-86db-cd980251a202","Type":"ContainerDied","Data":"75b8ea33afa2dc74710b8197cd60788f65dd6c58802ff69550dde775ef900e97"} Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.316542 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-656d9cf494-c6m8t" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.316552 4705 scope.go:117] "RemoveContainer" containerID="6ca9b1a8d277b8ac8e146f701cfca1d79427d28cc9235476ddd2bf5977afbd60" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.343430 4705 generic.go:334] "Generic (PLEG): container finished" podID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerID="c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14" exitCode=0 Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.344082 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.344675 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerDied","Data":"c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14"} Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.344766 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad341212-f2ac-4c6d-81cd-1113a9a524b2","Type":"ContainerDied","Data":"1d1c90bbe89df2444f211fbae43512bd74e7492f1d6052bd985eae43052c1133"} Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.372716 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.399196 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.402910 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.403151 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7lqc\" (UniqueName: \"kubernetes.io/projected/ad341212-f2ac-4c6d-81cd-1113a9a524b2-kube-api-access-z7lqc\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.403229 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad341212-f2ac-4c6d-81cd-1113a9a524b2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.403305 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.481023 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.506036 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.549755 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-config-data" (OuterVolumeSpecName: "config-data") pod "ad341212-f2ac-4c6d-81cd-1113a9a524b2" (UID: "ad341212-f2ac-4c6d-81cd-1113a9a524b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.608988 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad341212-f2ac-4c6d-81cd-1113a9a524b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.629023 4705 scope.go:117] "RemoveContainer" containerID="b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.640246 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-656d9cf494-c6m8t"] Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.660650 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-656d9cf494-c6m8t"] Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.661417 4705 scope.go:117] "RemoveContainer" containerID="f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.714915 4705 scope.go:117] "RemoveContainer" containerID="ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.750162 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.771915 4705 scope.go:117] "RemoveContainer" containerID="c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.782411 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.798488 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.799507 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a49bd2f-26b0-4969-86db-cd980251a202" containerName="heat-api" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.799578 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a49bd2f-26b0-4969-86db-cd980251a202" containerName="heat-api" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.799616 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="sg-core" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.799648 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="sg-core" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.799662 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-notification-agent" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.799669 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-notification-agent" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.799691 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-central-agent" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.799699 4705 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-central-agent" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.799735 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="proxy-httpd" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.799743 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="proxy-httpd" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.800129 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-central-agent" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.800157 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="ceilometer-notification-agent" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.800174 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="sg-core" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.800195 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a49bd2f-26b0-4969-86db-cd980251a202" containerName="heat-api" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.800210 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" containerName="proxy-httpd" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.802901 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.807824 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.808289 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.817378 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.826223 4705 scope.go:117] "RemoveContainer" containerID="b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.827640 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520\": container with ID starting with b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520 not found: ID does not exist" containerID="b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.827732 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520"} err="failed to get container status \"b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520\": rpc error: code = NotFound desc = could not find container \"b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520\": container with ID starting with b403c1c6f1bfe4a30a9cdf36f65d62a76829d30218876f17ac69c6a4bae8d520 not found: ID does not exist" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.827815 4705 scope.go:117] "RemoveContainer" containerID="f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 
15:16:41.828329 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094\": container with ID starting with f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094 not found: ID does not exist" containerID="f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.828424 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094"} err="failed to get container status \"f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094\": rpc error: code = NotFound desc = could not find container \"f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094\": container with ID starting with f363c6bcc943c80638c3cd999d1e723aecc79b64f7aeccda43e3914778111094 not found: ID does not exist" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.828501 4705 scope.go:117] "RemoveContainer" containerID="ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.828983 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466\": container with ID starting with ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466 not found: ID does not exist" containerID="ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.829062 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466"} err="failed to get container status \"ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466\": rpc 
error: code = NotFound desc = could not find container \"ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466\": container with ID starting with ce6d1c86099b262dc4aeb55982ff3089fb7f5c5dd978f4ed84c60aa354864466 not found: ID does not exist" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.829123 4705 scope.go:117] "RemoveContainer" containerID="c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14" Feb 16 15:16:41 crc kubenswrapper[4705]: E0216 15:16:41.829397 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14\": container with ID starting with c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14 not found: ID does not exist" containerID="c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.829482 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14"} err="failed to get container status \"c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14\": rpc error: code = NotFound desc = could not find container \"c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14\": container with ID starting with c8f2708dde913a730e7cd9bd6de6cb685ddede73526314253d45178aa3392b14 not found: ID does not exist" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.939999 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.940078 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-log-httpd\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.940197 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-run-httpd\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.940235 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-config-data\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.940308 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.940338 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-scripts\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4705]: I0216 15:16:41.940389 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw6k7\" (UniqueName: 
\"kubernetes.io/projected/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-kube-api-access-dw6k7\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.042702 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-run-httpd\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.043019 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-config-data\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.043222 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.043341 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-scripts\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.043492 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw6k7\" (UniqueName: \"kubernetes.io/projected/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-kube-api-access-dw6k7\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: 
I0216 15:16:42.043696 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.043891 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-log-httpd\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.044047 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-run-httpd\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.044293 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-log-httpd\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.051889 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.053346 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.054146 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-scripts\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.059116 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-config-data\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.070658 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw6k7\" (UniqueName: \"kubernetes.io/projected/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-kube-api-access-dw6k7\") pod \"ceilometer-0\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.138797 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.378756 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.379082 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.470635 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a49bd2f-26b0-4969-86db-cd980251a202" path="/var/lib/kubelet/pods/3a49bd2f-26b0-4969-86db-cd980251a202/volumes" Feb 16 15:16:42 crc kubenswrapper[4705]: I0216 15:16:42.471580 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad341212-f2ac-4c6d-81cd-1113a9a524b2" path="/var/lib/kubelet/pods/ad341212-f2ac-4c6d-81cd-1113a9a524b2/volumes" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.141560 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.212505 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.218180 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.218328 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.220949 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.345608 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-sz982" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.358015 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.419197 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerStarted","Data":"1f750dcbb262ca99ffa11d9f66cd78a9dd17c3af6bc8414778962cc8b0d43a40"} Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.433119 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1473-account-create-update-mpxtv" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.433111 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1473-account-create-update-mpxtv" event={"ID":"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2","Type":"ContainerDied","Data":"27d187140adb7c9667517d18bd334790d7f3f52282fe668dee19c30ae71b795d"} Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.433528 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27d187140adb7c9667517d18bd334790d7f3f52282fe668dee19c30ae71b795d" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.435797 4705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.436980 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-sz982" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.437158 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-sz982" event={"ID":"481dd88a-36b9-432c-9d21-9221f5e98e6e","Type":"ContainerDied","Data":"9cfce51868e78850a9a6331e47086bb0f35c3889bc3ab1b4af9675f440589e77"} Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.437181 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cfce51868e78850a9a6331e47086bb0f35c3889bc3ab1b4af9675f440589e77" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.514655 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d9jn\" (UniqueName: \"kubernetes.io/projected/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-kube-api-access-6d9jn\") pod \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.514720 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-operator-scripts\") pod \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\" (UID: \"885bde30-8f11-4a3f-b1ed-db26e4aa4ab2\") " Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.514797 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfzk6\" (UniqueName: \"kubernetes.io/projected/481dd88a-36b9-432c-9d21-9221f5e98e6e-kube-api-access-vfzk6\") pod \"481dd88a-36b9-432c-9d21-9221f5e98e6e\" (UID: \"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.514978 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481dd88a-36b9-432c-9d21-9221f5e98e6e-operator-scripts\") pod \"481dd88a-36b9-432c-9d21-9221f5e98e6e\" (UID: 
\"481dd88a-36b9-432c-9d21-9221f5e98e6e\") " Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.516602 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/481dd88a-36b9-432c-9d21-9221f5e98e6e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "481dd88a-36b9-432c-9d21-9221f5e98e6e" (UID: "481dd88a-36b9-432c-9d21-9221f5e98e6e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.516732 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" (UID: "885bde30-8f11-4a3f-b1ed-db26e4aa4ab2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.519134 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.519166 4705 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481dd88a-36b9-432c-9d21-9221f5e98e6e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.522642 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481dd88a-36b9-432c-9d21-9221f5e98e6e-kube-api-access-vfzk6" (OuterVolumeSpecName: "kube-api-access-vfzk6") pod "481dd88a-36b9-432c-9d21-9221f5e98e6e" (UID: "481dd88a-36b9-432c-9d21-9221f5e98e6e"). InnerVolumeSpecName "kube-api-access-vfzk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.532895 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-kube-api-access-6d9jn" (OuterVolumeSpecName: "kube-api-access-6d9jn") pod "885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" (UID: "885bde30-8f11-4a3f-b1ed-db26e4aa4ab2"). InnerVolumeSpecName "kube-api-access-6d9jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.622286 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d9jn\" (UniqueName: \"kubernetes.io/projected/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2-kube-api-access-6d9jn\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.622336 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfzk6\" (UniqueName: \"kubernetes.io/projected/481dd88a-36b9-432c-9d21-9221f5e98e6e-kube-api-access-vfzk6\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:43 crc kubenswrapper[4705]: I0216 15:16:43.873785 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:16:44 crc kubenswrapper[4705]: E0216 15:16:44.233476 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:44 crc kubenswrapper[4705]: E0216 15:16:44.236135 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:44 crc kubenswrapper[4705]: E0216 15:16:44.237878 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:44 crc kubenswrapper[4705]: E0216 15:16:44.237939 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor" Feb 16 15:16:44 crc kubenswrapper[4705]: I0216 15:16:44.461924 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerStarted","Data":"48e73b7a2e49fe1ae452d57c429665b68c5000f5389968e1e6b8065a7ce17b47"} Feb 16 15:16:45 crc kubenswrapper[4705]: I0216 15:16:45.474113 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerStarted","Data":"dd029ef787696a45ee8492edb3333989fffcd24f678a6be5d379b152c19ca553"} Feb 16 15:16:46 crc kubenswrapper[4705]: I0216 15:16:46.499326 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerStarted","Data":"56597cab99100354dba4a82ea8867c6ff59a4b68e68ff8f6fa9c785b02526e30"} Feb 16 15:16:47 crc kubenswrapper[4705]: I0216 15:16:47.521038 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerStarted","Data":"637fdfe934e0a8bf8ac98354b828f25afaaf9adfd49811868d5e08eb7725c1e1"} Feb 16 15:16:47 crc kubenswrapper[4705]: I0216 15:16:47.522119 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:16:47 crc kubenswrapper[4705]: I0216 15:16:47.553388 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.146810362 podStartE2EDuration="6.553350587s" podCreationTimestamp="2026-02-16 15:16:41 +0000 UTC" firstStartedPulling="2026-02-16 15:16:43.241155529 +0000 UTC m=+1397.426132595" lastFinishedPulling="2026-02-16 15:16:46.647695754 +0000 UTC m=+1400.832672820" observedRunningTime="2026-02-16 15:16:47.544500248 +0000 UTC m=+1401.729477334" watchObservedRunningTime="2026-02-16 15:16:47.553350587 +0000 UTC m=+1401.738327673" Feb 16 15:16:49 crc kubenswrapper[4705]: E0216 15:16:49.232992 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:49 crc kubenswrapper[4705]: E0216 15:16:49.235458 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:49 crc kubenswrapper[4705]: E0216 15:16:49.237899 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:49 crc kubenswrapper[4705]: E0216 15:16:49.237948 4705 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.556109 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-6brrx"] Feb 16 15:16:49 crc kubenswrapper[4705]: E0216 15:16:49.556800 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481dd88a-36b9-432c-9d21-9221f5e98e6e" containerName="mariadb-database-create" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.556824 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="481dd88a-36b9-432c-9d21-9221f5e98e6e" containerName="mariadb-database-create" Feb 16 15:16:49 crc kubenswrapper[4705]: E0216 15:16:49.556869 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" containerName="mariadb-account-create-update" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.556881 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" containerName="mariadb-account-create-update" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.557429 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" containerName="mariadb-account-create-update" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.557494 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="481dd88a-36b9-432c-9d21-9221f5e98e6e" containerName="mariadb-database-create" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 
15:16:49.558609 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.564266 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.564785 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.565494 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-l4hnj" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.565752 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.576633 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6brrx"] Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.601736 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg2np\" (UniqueName: \"kubernetes.io/projected/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-kube-api-access-jg2np\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.602160 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-combined-ca-bundle\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.602732 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-config-data\") 
pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.603289 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-scripts\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.706872 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-combined-ca-bundle\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.707066 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-config-data\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.707168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-scripts\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.707221 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg2np\" (UniqueName: \"kubernetes.io/projected/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-kube-api-access-jg2np\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 
15:16:49.715138 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-scripts\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.717024 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-config-data\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.724404 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-combined-ca-bundle\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.728219 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg2np\" (UniqueName: \"kubernetes.io/projected/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-kube-api-access-jg2np\") pod \"aodh-db-sync-6brrx\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:49 crc kubenswrapper[4705]: I0216 15:16:49.894798 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:50 crc kubenswrapper[4705]: I0216 15:16:50.440504 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6brrx"] Feb 16 15:16:50 crc kubenswrapper[4705]: W0216 15:16:50.448051 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf60aeda_83a7_4d56_95a6_c390c2d08b8a.slice/crio-0f94d753f4a7598743484baf7eeff05603053d694e224df6131eecb5e01ae511 WatchSource:0}: Error finding container 0f94d753f4a7598743484baf7eeff05603053d694e224df6131eecb5e01ae511: Status 404 returned error can't find the container with id 0f94d753f4a7598743484baf7eeff05603053d694e224df6131eecb5e01ae511 Feb 16 15:16:50 crc kubenswrapper[4705]: I0216 15:16:50.557523 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6brrx" event={"ID":"bf60aeda-83a7-4d56-95a6-c390c2d08b8a","Type":"ContainerStarted","Data":"0f94d753f4a7598743484baf7eeff05603053d694e224df6131eecb5e01ae511"} Feb 16 15:16:53 crc kubenswrapper[4705]: I0216 15:16:53.635678 4705 generic.go:334] "Generic (PLEG): container finished" podID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" exitCode=137 Feb 16 15:16:53 crc kubenswrapper[4705]: I0216 15:16:53.636027 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3e47e02d-1f4b-44d5-b6c7-d12353efb4db","Type":"ContainerDied","Data":"d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299"} Feb 16 15:16:54 crc kubenswrapper[4705]: E0216 15:16:54.231884 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299 is running failed: container process not found" 
containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:54 crc kubenswrapper[4705]: E0216 15:16:54.232268 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299 is running failed: container process not found" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:54 crc kubenswrapper[4705]: E0216 15:16:54.232984 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299 is running failed: container process not found" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 16 15:16:54 crc kubenswrapper[4705]: E0216 15:16:54.233186 4705 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.255142 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.287069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-combined-ca-bundle\") pod \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.287758 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-config-data\") pod \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.287882 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-729zw\" (UniqueName: \"kubernetes.io/projected/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-kube-api-access-729zw\") pod \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\" (UID: \"3e47e02d-1f4b-44d5-b6c7-d12353efb4db\") " Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.303680 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-kube-api-access-729zw" (OuterVolumeSpecName: "kube-api-access-729zw") pod "3e47e02d-1f4b-44d5-b6c7-d12353efb4db" (UID: "3e47e02d-1f4b-44d5-b6c7-d12353efb4db"). InnerVolumeSpecName "kube-api-access-729zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.325682 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-config-data" (OuterVolumeSpecName: "config-data") pod "3e47e02d-1f4b-44d5-b6c7-d12353efb4db" (UID: "3e47e02d-1f4b-44d5-b6c7-d12353efb4db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.326231 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e47e02d-1f4b-44d5-b6c7-d12353efb4db" (UID: "3e47e02d-1f4b-44d5-b6c7-d12353efb4db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.391875 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.391925 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.391938 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-729zw\" (UniqueName: \"kubernetes.io/projected/3e47e02d-1f4b-44d5-b6c7-d12353efb4db-kube-api-access-729zw\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.660644 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.660643 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3e47e02d-1f4b-44d5-b6c7-d12353efb4db","Type":"ContainerDied","Data":"89773dd4ddcc151bb2dd44670cb30683011d6f4c21b57a0cc856f9fb0cb8aa40"} Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.660731 4705 scope.go:117] "RemoveContainer" containerID="d37f446e39f63df5ea6d9317798ed8d92127330a3049de551d5f8823b35fc299" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.662890 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6brrx" event={"ID":"bf60aeda-83a7-4d56-95a6-c390c2d08b8a","Type":"ContainerStarted","Data":"156bb556fedfb04698cb018e9e76e595a938f3b84761da0b56951eb757c0d725"} Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.688852 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-6brrx" podStartSLOduration=2.381264803 podStartE2EDuration="6.688835601s" podCreationTimestamp="2026-02-16 15:16:49 +0000 UTC" firstStartedPulling="2026-02-16 15:16:50.450522293 +0000 UTC m=+1404.635499369" lastFinishedPulling="2026-02-16 15:16:54.758093081 +0000 UTC m=+1408.943070167" observedRunningTime="2026-02-16 15:16:55.683856251 +0000 UTC m=+1409.868833327" watchObservedRunningTime="2026-02-16 15:16:55.688835601 +0000 UTC m=+1409.873812677" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.772999 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.790197 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.823826 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:16:55 crc kubenswrapper[4705]: E0216 
15:16:55.824505 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.824529 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.824839 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" containerName="nova-cell0-conductor-conductor" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.826144 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.829182 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mq9hp" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.829479 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.852776 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.908833 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5bb097-aa56-4b02-942e-70b894afe84a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.908923 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v6q9\" (UniqueName: \"kubernetes.io/projected/4d5bb097-aa56-4b02-942e-70b894afe84a-kube-api-access-8v6q9\") pod 
\"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:55 crc kubenswrapper[4705]: I0216 15:16:55.909199 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5bb097-aa56-4b02-942e-70b894afe84a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.011100 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5bb097-aa56-4b02-942e-70b894afe84a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.011186 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5bb097-aa56-4b02-942e-70b894afe84a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.011234 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v6q9\" (UniqueName: \"kubernetes.io/projected/4d5bb097-aa56-4b02-942e-70b894afe84a-kube-api-access-8v6q9\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.015883 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d5bb097-aa56-4b02-942e-70b894afe84a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " 
pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.026168 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d5bb097-aa56-4b02-942e-70b894afe84a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.040129 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v6q9\" (UniqueName: \"kubernetes.io/projected/4d5bb097-aa56-4b02-942e-70b894afe84a-kube-api-access-8v6q9\") pod \"nova-cell0-conductor-0\" (UID: \"4d5bb097-aa56-4b02-942e-70b894afe84a\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.149364 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.444099 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e47e02d-1f4b-44d5-b6c7-d12353efb4db" path="/var/lib/kubelet/pods/3e47e02d-1f4b-44d5-b6c7-d12353efb4db/volumes" Feb 16 15:16:56 crc kubenswrapper[4705]: I0216 15:16:56.710042 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:16:56 crc kubenswrapper[4705]: W0216 15:16:56.718586 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d5bb097_aa56_4b02_942e_70b894afe84a.slice/crio-fa9641fcf07810e49dbf27210b544a2b90ea71df4af044f74754e15b9bead666 WatchSource:0}: Error finding container fa9641fcf07810e49dbf27210b544a2b90ea71df4af044f74754e15b9bead666: Status 404 returned error can't find the container with id fa9641fcf07810e49dbf27210b544a2b90ea71df4af044f74754e15b9bead666 Feb 16 15:16:57 crc kubenswrapper[4705]: I0216 15:16:57.703191 4705 
generic.go:334] "Generic (PLEG): container finished" podID="bf60aeda-83a7-4d56-95a6-c390c2d08b8a" containerID="156bb556fedfb04698cb018e9e76e595a938f3b84761da0b56951eb757c0d725" exitCode=0 Feb 16 15:16:57 crc kubenswrapper[4705]: I0216 15:16:57.703304 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6brrx" event={"ID":"bf60aeda-83a7-4d56-95a6-c390c2d08b8a","Type":"ContainerDied","Data":"156bb556fedfb04698cb018e9e76e595a938f3b84761da0b56951eb757c0d725"} Feb 16 15:16:57 crc kubenswrapper[4705]: I0216 15:16:57.706261 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4d5bb097-aa56-4b02-942e-70b894afe84a","Type":"ContainerStarted","Data":"81e8f4e116902e62b97158e552714c3661e953fa5a6ad6d50ae6d9172f24e2f0"} Feb 16 15:16:57 crc kubenswrapper[4705]: I0216 15:16:57.707078 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4d5bb097-aa56-4b02-942e-70b894afe84a","Type":"ContainerStarted","Data":"fa9641fcf07810e49dbf27210b544a2b90ea71df4af044f74754e15b9bead666"} Feb 16 15:16:57 crc kubenswrapper[4705]: I0216 15:16:57.707181 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 15:16:57 crc kubenswrapper[4705]: I0216 15:16:57.780069 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.780030022 podStartE2EDuration="2.780030022s" podCreationTimestamp="2026-02-16 15:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:57.756061026 +0000 UTC m=+1411.941038122" watchObservedRunningTime="2026-02-16 15:16:57.780030022 +0000 UTC m=+1411.965007138" Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.187479 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-6brrx" Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.312329 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-combined-ca-bundle\") pod \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.312982 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-config-data\") pod \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.313167 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg2np\" (UniqueName: \"kubernetes.io/projected/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-kube-api-access-jg2np\") pod \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.313435 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-scripts\") pod \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\" (UID: \"bf60aeda-83a7-4d56-95a6-c390c2d08b8a\") " Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.320151 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-kube-api-access-jg2np" (OuterVolumeSpecName: "kube-api-access-jg2np") pod "bf60aeda-83a7-4d56-95a6-c390c2d08b8a" (UID: "bf60aeda-83a7-4d56-95a6-c390c2d08b8a"). InnerVolumeSpecName "kube-api-access-jg2np". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.326616 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-scripts" (OuterVolumeSpecName: "scripts") pod "bf60aeda-83a7-4d56-95a6-c390c2d08b8a" (UID: "bf60aeda-83a7-4d56-95a6-c390c2d08b8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.347910 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf60aeda-83a7-4d56-95a6-c390c2d08b8a" (UID: "bf60aeda-83a7-4d56-95a6-c390c2d08b8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.352541 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-config-data" (OuterVolumeSpecName: "config-data") pod "bf60aeda-83a7-4d56-95a6-c390c2d08b8a" (UID: "bf60aeda-83a7-4d56-95a6-c390c2d08b8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.416346 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.416397 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg2np\" (UniqueName: \"kubernetes.io/projected/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-kube-api-access-jg2np\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.416409 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.416421 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf60aeda-83a7-4d56-95a6-c390c2d08b8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.732470 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6brrx" event={"ID":"bf60aeda-83a7-4d56-95a6-c390c2d08b8a","Type":"ContainerDied","Data":"0f94d753f4a7598743484baf7eeff05603053d694e224df6131eecb5e01ae511"} Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.732513 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f94d753f4a7598743484baf7eeff05603053d694e224df6131eecb5e01ae511" Feb 16 15:16:59 crc kubenswrapper[4705]: I0216 15:16:59.732538 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-6brrx" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.184872 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.672027 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-v8zp2"] Feb 16 15:17:01 crc kubenswrapper[4705]: E0216 15:17:01.672926 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf60aeda-83a7-4d56-95a6-c390c2d08b8a" containerName="aodh-db-sync" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.672958 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf60aeda-83a7-4d56-95a6-c390c2d08b8a" containerName="aodh-db-sync" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.673256 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf60aeda-83a7-4d56-95a6-c390c2d08b8a" containerName="aodh-db-sync" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.674537 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.676873 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.680458 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.692600 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-v8zp2"] Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.801757 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-scripts\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.801800 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-config-data\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.801825 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.801928 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx8mw\" (UniqueName: 
\"kubernetes.io/projected/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-kube-api-access-jx8mw\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.837982 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.839903 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.844778 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.892524 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.905296 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-scripts\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.905349 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-config-data\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.905409 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " 
pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.905585 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx8mw\" (UniqueName: \"kubernetes.io/projected/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-kube-api-access-jx8mw\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.914863 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-config-data\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.915143 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-scripts\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.915697 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.959686 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx8mw\" (UniqueName: \"kubernetes.io/projected/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-kube-api-access-jx8mw\") pod \"nova-cell0-cell-mapping-v8zp2\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:01 crc 
kubenswrapper[4705]: I0216 15:17:01.965399 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.968620 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 16 15:17:01 crc kubenswrapper[4705]: I0216 15:17:01.979273 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-l4hnj"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:01.999356 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.001009 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.001851 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.021095 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037185 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037351 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037413 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-scripts\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037445 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-config-data\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037472 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgfgl\" (UniqueName: \"kubernetes.io/projected/6d3bb879-c0d5-4b09-a454-034daa93ab77-kube-api-access-fgfgl\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037527 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vrqg\" (UniqueName: \"kubernetes.io/projected/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-kube-api-access-4vrqg\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.037596 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-config-data\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.085722 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.087949 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.100481 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.151220 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-config-data\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.152838 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.152995 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.153042 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-scripts\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.153083 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-config-data\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.153108 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgfgl\" (UniqueName: \"kubernetes.io/projected/6d3bb879-c0d5-4b09-a454-034daa93ab77-kube-api-access-fgfgl\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.153174 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vrqg\" (UniqueName: \"kubernetes.io/projected/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-kube-api-access-4vrqg\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.189520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.190926 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-config-data\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.190532 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-config-data\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.190023 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vrqg\" (UniqueName: \"kubernetes.io/projected/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-kube-api-access-4vrqg\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.191278 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.191708 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-scripts\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.214831 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgfgl\" (UniqueName: \"kubernetes.io/projected/6d3bb879-c0d5-4b09-a454-034daa93ab77-kube-api-access-fgfgl\") pod \"aodh-0\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.223291 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.257360 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.257714 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-config-data\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.257769 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbn44\" (UniqueName: \"kubernetes.io/projected/d192f950-fab8-43a1-828b-4bc1613acb4f-kube-api-access-rbn44\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.257833 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192f950-fab8-43a1-828b-4bc1613acb4f-logs\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.274890 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.276817 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.282704 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.324430 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.355963 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.358755 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.361556 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192f950-fab8-43a1-828b-4bc1613acb4f-logs\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.361711 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.361730 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-config-data\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.361771 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbn44\" (UniqueName: \"kubernetes.io/projected/d192f950-fab8-43a1-828b-4bc1613acb4f-kube-api-access-rbn44\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.362141 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192f950-fab8-43a1-828b-4bc1613acb4f-logs\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.363863 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.374114 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.379841 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-config-data\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.416207 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.449345 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbn44\" (UniqueName: \"kubernetes.io/projected/d192f950-fab8-43a1-828b-4bc1613acb4f-kube-api-access-rbn44\") pod \"nova-api-0\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") " pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.470804 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.489851 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.520401 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-config-data\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.521016 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6245h\" (UniqueName: \"kubernetes.io/projected/c403fb44-6250-449b-b257-953b925c635a-kube-api-access-6245h\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.521149 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.523217 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.523322 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c403fb44-6250-449b-b257-953b925c635a-logs\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.523494 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.523563 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmx7h\" (UniqueName: \"kubernetes.io/projected/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-kube-api-access-gmx7h\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.539304 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.544080 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hbrjc"]
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.546607 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626023 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626145 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c403fb44-6250-449b-b257-953b925c635a-logs\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626642 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626697 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmx7h\" (UniqueName: \"kubernetes.io/projected/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-kube-api-access-gmx7h\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626743 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-config-data\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626799 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6245h\" (UniqueName: \"kubernetes.io/projected/c403fb44-6250-449b-b257-953b925c635a-kube-api-access-6245h\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.626852 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.627722 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hbrjc"]
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.632836 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c403fb44-6250-449b-b257-953b925c635a-logs\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.636669 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.642920 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.663689 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.665710 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmx7h\" (UniqueName: \"kubernetes.io/projected/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-kube-api-access-gmx7h\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.676852 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-config-data\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.682232 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6245h\" (UniqueName: \"kubernetes.io/projected/c403fb44-6250-449b-b257-953b925c635a-kube-api-access-6245h\") pod \"nova-metadata-0\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.729871 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.729989 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.730047 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.730104 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-config\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.730180 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nxgb\" (UniqueName: \"kubernetes.io/projected/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-kube-api-access-2nxgb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.730210 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-svc\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.740036 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.769902 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.839723 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.839837 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.839897 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.839959 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-config\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.840040 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nxgb\" (UniqueName: \"kubernetes.io/projected/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-kube-api-access-2nxgb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.840073 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-svc\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.841073 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-svc\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.842286 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.842649 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.842921 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-config\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.843266 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.877120 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nxgb\" (UniqueName: \"kubernetes.io/projected/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-kube-api-access-2nxgb\") pod \"dnsmasq-dns-9b86998b5-hbrjc\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.934581 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc"
Feb 16 15:17:02 crc kubenswrapper[4705]: I0216 15:17:02.957048 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-v8zp2"]
Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.302076 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:03 crc kubenswrapper[4705]: W0216 15:17:03.549569 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3bb879_c0d5_4b09_a454_034daa93ab77.slice/crio-107badcc630ad4f6903ae7ffcd033ff5a892847e00104684492ac9a7124f1280 WatchSource:0}: Error finding container 107badcc630ad4f6903ae7ffcd033ff5a892847e00104684492ac9a7124f1280: Status 404 returned error can't find the container with id 107badcc630ad4f6903ae7ffcd033ff5a892847e00104684492ac9a7124f1280
Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.598182 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.617613 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.880785 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac","Type":"ContainerStarted","Data":"fc7c9ea585cc1fde92feb6b64f7c9647742d877ff5656a5cd26ed4a40b9bc589"}
Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.909513 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c29kz"]
Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.934360 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d192f950-fab8-43a1-828b-4bc1613acb4f","Type":"ContainerStarted","Data":"1536a95ab5596e441f283dcccf66e85b779a0237afc5c6e0d01652df6f0e34b4"}
Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.934547 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c29kz"
Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.936668 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerStarted","Data":"107badcc630ad4f6903ae7ffcd033ff5a892847e00104684492ac9a7124f1280"}
Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.958819 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c29kz"]
Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.963699 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.966198 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v8zp2" event={"ID":"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993","Type":"ContainerStarted","Data":"014788fc35c94841b6f951360c014870b95d49ee1ef3f79b1ab6afab99936dbb"}
Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.966249 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v8zp2" event={"ID":"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993","Type":"ContainerStarted","Data":"21670372d25daf481fb0e0c8cb90e3d0d283f8f3d303d189ab66dd063244da1d"}
Feb 16 15:17:03 crc kubenswrapper[4705]: I0216 15:17:03.979752 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.078788 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz"
Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.078918 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-config-data\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz"
Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.079293 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-scripts\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz"
Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.079417 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrg5\" (UniqueName: \"kubernetes.io/projected/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-kube-api-access-lfrg5\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz"
Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.190585 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-scripts\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz"
Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.191075 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrg5\" (UniqueName: \"kubernetes.io/projected/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-kube-api-access-lfrg5\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz"
Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.191134 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.191159 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-config-data\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.206119 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-config-data\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.206395 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.248675 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hbrjc"] Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.251686 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-scripts\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.255965 4705 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.275053 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrg5\" (UniqueName: \"kubernetes.io/projected/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-kube-api-access-lfrg5\") pod \"nova-cell1-conductor-db-sync-c29kz\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.348976 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.357671 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-v8zp2" podStartSLOduration=3.357647729 podStartE2EDuration="3.357647729s" podCreationTimestamp="2026-02-16 15:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:04.095956607 +0000 UTC m=+1418.280933693" watchObservedRunningTime="2026-02-16 15:17:04.357647729 +0000 UTC m=+1418.542624805" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.564174 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:04 crc kubenswrapper[4705]: I0216 15:17:04.993909 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c","Type":"ContainerStarted","Data":"2c7d553310530035d6f4243d4ec8d424a9dbcb3e3927033f1971bef339bd967f"} Feb 16 15:17:05 crc kubenswrapper[4705]: I0216 15:17:05.000843 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c403fb44-6250-449b-b257-953b925c635a","Type":"ContainerStarted","Data":"dee0ea11222770d7565040c2a8d452d725637a688407fbd260ff2426c890c0e6"} Feb 16 15:17:05 crc kubenswrapper[4705]: I0216 15:17:05.020596 4705 generic.go:334] "Generic (PLEG): container finished" podID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerID="ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31" exitCode=0 Feb 16 15:17:05 crc kubenswrapper[4705]: I0216 15:17:05.022525 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" event={"ID":"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7","Type":"ContainerDied","Data":"ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31"} Feb 16 15:17:05 crc kubenswrapper[4705]: I0216 15:17:05.022608 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" event={"ID":"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7","Type":"ContainerStarted","Data":"ede06e3254a42f9f6eec0ac56c7e1b7e4b102971ccf37608944546f6accc4101"} Feb 16 15:17:05 crc kubenswrapper[4705]: I0216 15:17:05.280198 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c29kz"] Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.053475 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c29kz" 
event={"ID":"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8","Type":"ContainerStarted","Data":"5ae2ce7f764bba95fefdc2957453d34ae6c76d5367261ab8d7e532efc53c1306"} Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.054014 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c29kz" event={"ID":"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8","Type":"ContainerStarted","Data":"f1cba0996283d3a30785b20c2b5138e18d1243d50932f93f9ed341cdfd481c88"} Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.056761 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" event={"ID":"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7","Type":"ContainerStarted","Data":"b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37"} Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.057035 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.086737 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-c29kz" podStartSLOduration=3.086717499 podStartE2EDuration="3.086717499s" podCreationTimestamp="2026-02-16 15:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:06.081963115 +0000 UTC m=+1420.266940191" watchObservedRunningTime="2026-02-16 15:17:06.086717499 +0000 UTC m=+1420.271694575" Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.135985 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.153592 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:17:06 crc kubenswrapper[4705]: I0216 15:17:06.167264 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" podStartSLOduration=4.1672335369999995 podStartE2EDuration="4.167233537s" podCreationTimestamp="2026-02-16 15:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:06.104469009 +0000 UTC m=+1420.289446085" watchObservedRunningTime="2026-02-16 15:17:06.167233537 +0000 UTC m=+1420.352210613" Feb 16 15:17:08 crc kubenswrapper[4705]: I0216 15:17:08.889237 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:08 crc kubenswrapper[4705]: I0216 15:17:08.890205 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-central-agent" containerID="cri-o://48e73b7a2e49fe1ae452d57c429665b68c5000f5389968e1e6b8065a7ce17b47" gracePeriod=30 Feb 16 15:17:08 crc kubenswrapper[4705]: I0216 15:17:08.891330 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="proxy-httpd" containerID="cri-o://637fdfe934e0a8bf8ac98354b828f25afaaf9adfd49811868d5e08eb7725c1e1" gracePeriod=30 Feb 16 15:17:08 crc kubenswrapper[4705]: I0216 15:17:08.891396 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="sg-core" containerID="cri-o://56597cab99100354dba4a82ea8867c6ff59a4b68e68ff8f6fa9c785b02526e30" gracePeriod=30 Feb 16 15:17:08 crc kubenswrapper[4705]: I0216 15:17:08.891436 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-notification-agent" containerID="cri-o://dd029ef787696a45ee8492edb3333989fffcd24f678a6be5d379b152c19ca553" gracePeriod=30 Feb 16 
15:17:08 crc kubenswrapper[4705]: I0216 15:17:08.904668 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.237:3000/\": EOF" Feb 16 15:17:09 crc kubenswrapper[4705]: I0216 15:17:09.121095 4705 generic.go:334] "Generic (PLEG): container finished" podID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerID="56597cab99100354dba4a82ea8867c6ff59a4b68e68ff8f6fa9c785b02526e30" exitCode=2 Feb 16 15:17:09 crc kubenswrapper[4705]: I0216 15:17:09.121181 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerDied","Data":"56597cab99100354dba4a82ea8867c6ff59a4b68e68ff8f6fa9c785b02526e30"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.135796 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c","Type":"ContainerStarted","Data":"d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.136230 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c" gracePeriod=30 Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.137855 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerStarted","Data":"e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.140909 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"c403fb44-6250-449b-b257-953b925c635a","Type":"ContainerStarted","Data":"a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.140942 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c403fb44-6250-449b-b257-953b925c635a","Type":"ContainerStarted","Data":"4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.141108 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-log" containerID="cri-o://4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035" gracePeriod=30 Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.141145 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-metadata" containerID="cri-o://a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51" gracePeriod=30 Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.144963 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac","Type":"ContainerStarted","Data":"9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.149563 4705 generic.go:334] "Generic (PLEG): container finished" podID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerID="637fdfe934e0a8bf8ac98354b828f25afaaf9adfd49811868d5e08eb7725c1e1" exitCode=0 Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.149601 4705 generic.go:334] "Generic (PLEG): container finished" podID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerID="48e73b7a2e49fe1ae452d57c429665b68c5000f5389968e1e6b8065a7ce17b47" exitCode=0 Feb 16 15:17:10 crc 
kubenswrapper[4705]: I0216 15:17:10.149647 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerDied","Data":"637fdfe934e0a8bf8ac98354b828f25afaaf9adfd49811868d5e08eb7725c1e1"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.149680 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerDied","Data":"48e73b7a2e49fe1ae452d57c429665b68c5000f5389968e1e6b8065a7ce17b47"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.151828 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d192f950-fab8-43a1-828b-4bc1613acb4f","Type":"ContainerStarted","Data":"5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.151890 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d192f950-fab8-43a1-828b-4bc1613acb4f","Type":"ContainerStarted","Data":"68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d"} Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.164194 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.985916268 podStartE2EDuration="8.164175785s" podCreationTimestamp="2026-02-16 15:17:02 +0000 UTC" firstStartedPulling="2026-02-16 15:17:04.148035494 +0000 UTC m=+1418.333012570" lastFinishedPulling="2026-02-16 15:17:09.326295011 +0000 UTC m=+1423.511272087" observedRunningTime="2026-02-16 15:17:10.163053433 +0000 UTC m=+1424.348030509" watchObservedRunningTime="2026-02-16 15:17:10.164175785 +0000 UTC m=+1424.349152861" Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.200384 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.440189565 
podStartE2EDuration="9.200348744s" podCreationTimestamp="2026-02-16 15:17:01 +0000 UTC" firstStartedPulling="2026-02-16 15:17:03.57175859 +0000 UTC m=+1417.756735666" lastFinishedPulling="2026-02-16 15:17:09.331917769 +0000 UTC m=+1423.516894845" observedRunningTime="2026-02-16 15:17:10.197323659 +0000 UTC m=+1424.382300735" watchObservedRunningTime="2026-02-16 15:17:10.200348744 +0000 UTC m=+1424.385325820" Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.274560 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.261138752 podStartE2EDuration="9.272943629s" podCreationTimestamp="2026-02-16 15:17:01 +0000 UTC" firstStartedPulling="2026-02-16 15:17:03.309881774 +0000 UTC m=+1417.494858850" lastFinishedPulling="2026-02-16 15:17:09.321686651 +0000 UTC m=+1423.506663727" observedRunningTime="2026-02-16 15:17:10.218764253 +0000 UTC m=+1424.403741329" watchObservedRunningTime="2026-02-16 15:17:10.272943629 +0000 UTC m=+1424.457920705" Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.290829 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.046143955 podStartE2EDuration="8.290806092s" podCreationTimestamp="2026-02-16 15:17:02 +0000 UTC" firstStartedPulling="2026-02-16 15:17:04.076582301 +0000 UTC m=+1418.261559387" lastFinishedPulling="2026-02-16 15:17:09.321244448 +0000 UTC m=+1423.506221524" observedRunningTime="2026-02-16 15:17:10.247179933 +0000 UTC m=+1424.432157019" watchObservedRunningTime="2026-02-16 15:17:10.290806092 +0000 UTC m=+1424.475783168" Feb 16 15:17:10 crc kubenswrapper[4705]: I0216 15:17:10.689021 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 16 15:17:11 crc kubenswrapper[4705]: I0216 15:17:11.189965 4705 generic.go:334] "Generic (PLEG): container finished" podID="c403fb44-6250-449b-b257-953b925c635a" 
containerID="4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035" exitCode=143 Feb 16 15:17:11 crc kubenswrapper[4705]: I0216 15:17:11.191316 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c403fb44-6250-449b-b257-953b925c635a","Type":"ContainerDied","Data":"4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035"} Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.205676 4705 generic.go:334] "Generic (PLEG): container finished" podID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerID="dd029ef787696a45ee8492edb3333989fffcd24f678a6be5d379b152c19ca553" exitCode=0 Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.205752 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerDied","Data":"dd029ef787696a45ee8492edb3333989fffcd24f678a6be5d379b152c19ca553"} Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.206673 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881aa943-ed5c-4d96-aa9e-3942b76d8e1a","Type":"ContainerDied","Data":"1f750dcbb262ca99ffa11d9f66cd78a9dd17c3af6bc8414778962cc8b0d43a40"} Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.206686 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f750dcbb262ca99ffa11d9f66cd78a9dd17c3af6bc8414778962cc8b0d43a40" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.209435 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerStarted","Data":"b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f"} Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.230866 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292337 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-combined-ca-bundle\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292527 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw6k7\" (UniqueName: \"kubernetes.io/projected/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-kube-api-access-dw6k7\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292566 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-run-httpd\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292614 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-config-data\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292660 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-sg-core-conf-yaml\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292765 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-scripts\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.292797 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-log-httpd\") pod \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\" (UID: \"881aa943-ed5c-4d96-aa9e-3942b76d8e1a\") " Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.294102 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.299516 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.304078 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-scripts" (OuterVolumeSpecName: "scripts") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.304253 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-kube-api-access-dw6k7" (OuterVolumeSpecName: "kube-api-access-dw6k7") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "kube-api-access-dw6k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.355888 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.395823 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw6k7\" (UniqueName: \"kubernetes.io/projected/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-kube-api-access-dw6k7\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.395869 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.395879 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.395887 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.395896 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.412435 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.469678 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-config-data" (OuterVolumeSpecName: "config-data") pod "881aa943-ed5c-4d96-aa9e-3942b76d8e1a" (UID: "881aa943-ed5c-4d96-aa9e-3942b76d8e1a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.471914 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.471964 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.484651 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.484708 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.501463 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.501498 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881aa943-ed5c-4d96-aa9e-3942b76d8e1a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.545037 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.741379 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.770300 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.770353 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:17:12 crc kubenswrapper[4705]: I0216 15:17:12.936540 4705 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.024457 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zg26f"] Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.024698 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerName="dnsmasq-dns" containerID="cri-o://2b2c7f5ac108f1a28b51646f3261bd0600fde3c58221d5733c1cb4d19e39339a" gracePeriod=10 Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.250423 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerID="2b2c7f5ac108f1a28b51646f3261bd0600fde3c58221d5733c1cb4d19e39339a" exitCode=0 Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.250932 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" event={"ID":"6f14f59b-5faf-48e0-bbdc-7f97c3836a35","Type":"ContainerDied","Data":"2b2c7f5ac108f1a28b51646f3261bd0600fde3c58221d5733c1cb4d19e39339a"} Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.251015 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.341352 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.405260 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.452608 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.467006 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:13 crc kubenswrapper[4705]: E0216 15:17:13.467758 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-notification-agent" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.467777 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-notification-agent" Feb 16 15:17:13 crc kubenswrapper[4705]: E0216 15:17:13.467791 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-central-agent" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.467797 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-central-agent" Feb 16 15:17:13 crc kubenswrapper[4705]: E0216 15:17:13.467814 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="proxy-httpd" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.467820 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="proxy-httpd" Feb 16 15:17:13 crc kubenswrapper[4705]: E0216 15:17:13.467835 4705 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="sg-core" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.467842 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="sg-core" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.468121 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-central-agent" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.468158 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="ceilometer-notification-agent" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.468173 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="sg-core" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.468184 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="proxy-httpd" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.470751 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.473631 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.478428 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.481870 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551473 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs7dx\" (UniqueName: \"kubernetes.io/projected/f1c48521-25a2-4bd8-be3f-ad6da6409486-kube-api-access-rs7dx\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551537 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-config-data\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551620 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551701 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-run-httpd\") pod \"ceilometer-0\" (UID: 
\"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551720 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-log-httpd\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551794 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.551831 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-scripts\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.555660 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.243:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.555697 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.243:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.653991 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-scripts\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654135 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs7dx\" (UniqueName: \"kubernetes.io/projected/f1c48521-25a2-4bd8-be3f-ad6da6409486-kube-api-access-rs7dx\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654190 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-config-data\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654245 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654319 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-run-httpd\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654340 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-log-httpd\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " 
pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654401 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.654912 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-run-httpd\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.655035 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-log-httpd\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.661283 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.661756 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-config-data\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.665057 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-scripts\") pod 
\"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.677260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs7dx\" (UniqueName: \"kubernetes.io/projected/f1c48521-25a2-4bd8-be3f-ad6da6409486-kube-api-access-rs7dx\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.678139 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " pod="openstack/ceilometer-0" Feb 16 15:17:13 crc kubenswrapper[4705]: I0216 15:17:13.805120 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.464045 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" path="/var/lib/kubelet/pods/881aa943-ed5c-4d96-aa9e-3942b76d8e1a/volumes" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.481851 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.601147 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-config\") pod \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.601207 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-swift-storage-0\") pod \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.601298 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkr56\" (UniqueName: \"kubernetes.io/projected/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-kube-api-access-lkr56\") pod \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.601403 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-sb\") pod \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.601429 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-svc\") pod \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.601453 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-nb\") pod \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\" (UID: \"6f14f59b-5faf-48e0-bbdc-7f97c3836a35\") " Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.604358 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.620644 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-kube-api-access-lkr56" (OuterVolumeSpecName: "kube-api-access-lkr56") pod "6f14f59b-5faf-48e0-bbdc-7f97c3836a35" (UID: "6f14f59b-5faf-48e0-bbdc-7f97c3836a35"). InnerVolumeSpecName "kube-api-access-lkr56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.690502 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f14f59b-5faf-48e0-bbdc-7f97c3836a35" (UID: "6f14f59b-5faf-48e0-bbdc-7f97c3836a35"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.705555 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.705589 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkr56\" (UniqueName: \"kubernetes.io/projected/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-kube-api-access-lkr56\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.705842 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6f14f59b-5faf-48e0-bbdc-7f97c3836a35" (UID: "6f14f59b-5faf-48e0-bbdc-7f97c3836a35"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.736908 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-config" (OuterVolumeSpecName: "config") pod "6f14f59b-5faf-48e0-bbdc-7f97c3836a35" (UID: "6f14f59b-5faf-48e0-bbdc-7f97c3836a35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.750923 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f14f59b-5faf-48e0-bbdc-7f97c3836a35" (UID: "6f14f59b-5faf-48e0-bbdc-7f97c3836a35"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.766195 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f14f59b-5faf-48e0-bbdc-7f97c3836a35" (UID: "6f14f59b-5faf-48e0-bbdc-7f97c3836a35"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.808966 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.809016 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.809033 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:14 crc kubenswrapper[4705]: I0216 15:17:14.809048 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f14f59b-5faf-48e0-bbdc-7f97c3836a35-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.361926 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerStarted","Data":"ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c"} Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.369917 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" 
event={"ID":"6f14f59b-5faf-48e0-bbdc-7f97c3836a35","Type":"ContainerDied","Data":"6acd1944658746507adf3b4af992bae06e651f8bf8b1f5ec60b84795bec2d1f1"} Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.370021 4705 scope.go:117] "RemoveContainer" containerID="2b2c7f5ac108f1a28b51646f3261bd0600fde3c58221d5733c1cb4d19e39339a" Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.370232 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-zg26f" Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.374452 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerStarted","Data":"4259daba5069f1ad1d3855f14ca5d403733a2ff26df6d21b1e554e1a1f3397e0"} Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.420515 4705 scope.go:117] "RemoveContainer" containerID="af7fbc84522ccf5649bb0a370c37dac7dd268bfbb7ce51833545d0053cd05d20" Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.446584 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zg26f"] Feb 16 15:17:15 crc kubenswrapper[4705]: I0216 15:17:15.459352 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-zg26f"] Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.309137 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.410296 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerStarted","Data":"f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f"} Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.413778 4705 generic.go:334] "Generic (PLEG): container finished" podID="b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" 
containerID="5ae2ce7f764bba95fefdc2957453d34ae6c76d5367261ab8d7e532efc53c1306" exitCode=0 Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.413849 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c29kz" event={"ID":"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8","Type":"ContainerDied","Data":"5ae2ce7f764bba95fefdc2957453d34ae6c76d5367261ab8d7e532efc53c1306"} Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.437315 4705 generic.go:334] "Generic (PLEG): container finished" podID="b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" containerID="014788fc35c94841b6f951360c014870b95d49ee1ef3f79b1ab6afab99936dbb" exitCode=0 Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.451636 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" path="/var/lib/kubelet/pods/6f14f59b-5faf-48e0-bbdc-7f97c3836a35/volumes" Feb 16 15:17:16 crc kubenswrapper[4705]: I0216 15:17:16.452452 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v8zp2" event={"ID":"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993","Type":"ContainerDied","Data":"014788fc35c94841b6f951360c014870b95d49ee1ef3f79b1ab6afab99936dbb"} Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.464587 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerStarted","Data":"3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a"} Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.472237 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerStarted","Data":"1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1"} Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.472556 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" 
podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-api" containerID="cri-o://e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f" gracePeriod=30 Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.472640 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-evaluator" containerID="cri-o://b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f" gracePeriod=30 Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.472622 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-listener" containerID="cri-o://1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1" gracePeriod=30 Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.472669 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-notifier" containerID="cri-o://ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c" gracePeriod=30 Feb 16 15:17:17 crc kubenswrapper[4705]: I0216 15:17:17.525956 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.6027582750000002 podStartE2EDuration="16.525931432s" podCreationTimestamp="2026-02-16 15:17:01 +0000 UTC" firstStartedPulling="2026-02-16 15:17:03.561642125 +0000 UTC m=+1417.746619201" lastFinishedPulling="2026-02-16 15:17:16.484815282 +0000 UTC m=+1430.669792358" observedRunningTime="2026-02-16 15:17:17.494834506 +0000 UTC m=+1431.679811602" watchObservedRunningTime="2026-02-16 15:17:17.525931432 +0000 UTC m=+1431.710908508" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.299472 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c29kz" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.304653 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v8zp2" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.379783 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-scripts\") pod \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380042 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx8mw\" (UniqueName: \"kubernetes.io/projected/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-kube-api-access-jx8mw\") pod \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380076 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfrg5\" (UniqueName: \"kubernetes.io/projected/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-kube-api-access-lfrg5\") pod \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380334 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-combined-ca-bundle\") pod \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380832 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-config-data\") pod \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\" (UID: 
\"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380892 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-config-data\") pod \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\" (UID: \"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380969 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-combined-ca-bundle\") pod \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.380999 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-scripts\") pod \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\" (UID: \"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993\") " Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.421016 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-scripts" (OuterVolumeSpecName: "scripts") pod "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" (UID: "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.421058 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-kube-api-access-jx8mw" (OuterVolumeSpecName: "kube-api-access-jx8mw") pod "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" (UID: "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993"). InnerVolumeSpecName "kube-api-access-jx8mw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.423215 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-kube-api-access-lfrg5" (OuterVolumeSpecName: "kube-api-access-lfrg5") pod "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" (UID: "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8"). InnerVolumeSpecName "kube-api-access-lfrg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.443576 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-scripts" (OuterVolumeSpecName: "scripts") pod "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" (UID: "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.509118 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx8mw\" (UniqueName: \"kubernetes.io/projected/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-kube-api-access-jx8mw\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.509163 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfrg5\" (UniqueName: \"kubernetes.io/projected/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-kube-api-access-lfrg5\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.509173 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.509182 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-scripts\") on node \"crc\" DevicePath \"\"" 
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.538738 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" (UID: "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.540528 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-config-data" (OuterVolumeSpecName: "config-data") pod "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" (UID: "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.543758 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c29kz" event={"ID":"b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8","Type":"ContainerDied","Data":"f1cba0996283d3a30785b20c2b5138e18d1243d50932f93f9ed341cdfd481c88"}
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.543805 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1cba0996283d3a30785b20c2b5138e18d1243d50932f93f9ed341cdfd481c88"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.543968 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c29kz"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.576018 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-config-data" (OuterVolumeSpecName: "config-data") pod "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" (UID: "b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.585698 4705 generic.go:334] "Generic (PLEG): container finished" podID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerID="ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c" exitCode=0
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.585741 4705 generic.go:334] "Generic (PLEG): container finished" podID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerID="b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f" exitCode=0
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.585750 4705 generic.go:334] "Generic (PLEG): container finished" podID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerID="e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f" exitCode=0
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.585832 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerDied","Data":"ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c"}
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.585864 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerDied","Data":"b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f"}
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.585876 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerDied","Data":"e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f"}
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.586024 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" (UID: "b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.597572 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.598131 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v8zp2"
Feb 16 15:17:18 crc kubenswrapper[4705]: E0216 15:17:18.598204 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerName="dnsmasq-dns"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.598218 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerName="dnsmasq-dns"
Feb 16 15:17:18 crc kubenswrapper[4705]: E0216 15:17:18.598244 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" containerName="nova-cell1-conductor-db-sync"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.598252 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" containerName="nova-cell1-conductor-db-sync"
Feb 16 15:17:18 crc kubenswrapper[4705]: E0216 15:17:18.598265 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" containerName="nova-manage"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.598271 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" containerName="nova-manage"
Feb 16 15:17:18 crc kubenswrapper[4705]: E0216 15:17:18.598304 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerName="init"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.598312 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerName="init"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.604363 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" containerName="nova-cell1-conductor-db-sync"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.604457 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f14f59b-5faf-48e0-bbdc-7f97c3836a35" containerName="dnsmasq-dns"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.604492 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" containerName="nova-manage"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.606092 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v8zp2" event={"ID":"b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993","Type":"ContainerDied","Data":"21670372d25daf481fb0e0c8cb90e3d0d283f8f3d303d189ab66dd063244da1d"}
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.606137 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21670372d25daf481fb0e0c8cb90e3d0d283f8f3d303d189ab66dd063244da1d"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.606245 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.616263 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerStarted","Data":"8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f"}
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.626059 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.626121 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.626132 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.626142 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.707361 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.729629 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.729782 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.729908 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mrm\" (UniqueName: \"kubernetes.io/projected/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-kube-api-access-w8mrm\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.762983 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.763281 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" containerName="nova-scheduler-scheduler" containerID="cri-o://9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea" gracePeriod=30
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.789333 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.789633 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-log" containerID="cri-o://68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d" gracePeriod=30
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.789786 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-api" containerID="cri-o://5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742" gracePeriod=30
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.834549 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.843665 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.845858 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8mrm\" (UniqueName: \"kubernetes.io/projected/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-kube-api-access-w8mrm\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.846168 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.860312 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.864191 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mrm\" (UniqueName: \"kubernetes.io/projected/53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d-kube-api-access-w8mrm\") pod \"nova-cell1-conductor-0\" (UID: \"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d\") " pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:18 crc kubenswrapper[4705]: I0216 15:17:18.936603 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.535040 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.639180 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d","Type":"ContainerStarted","Data":"b1ad318ec09dd1620386968e6fa2b491069c7d48c8f5fd9f5f0d017edb59be8d"}
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.669502 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerStarted","Data":"a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071"}
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.669765 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-central-agent" containerID="cri-o://f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f" gracePeriod=30
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.670168 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.671494 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-notification-agent" containerID="cri-o://3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a" gracePeriod=30
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.671578 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="sg-core" containerID="cri-o://8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f" gracePeriod=30
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.671740 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="proxy-httpd" containerID="cri-o://a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071" gracePeriod=30
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.683880 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d192f950-fab8-43a1-828b-4bc1613acb4f","Type":"ContainerDied","Data":"68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d"}
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.683884 4705 generic.go:334] "Generic (PLEG): container finished" podID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerID="68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d" exitCode=143
Feb 16 15:17:19 crc kubenswrapper[4705]: I0216 15:17:19.705851 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.449539698 podStartE2EDuration="6.705826561s" podCreationTimestamp="2026-02-16 15:17:13 +0000 UTC" firstStartedPulling="2026-02-16 15:17:14.621899422 +0000 UTC m=+1428.806876498" lastFinishedPulling="2026-02-16 15:17:18.878186285 +0000 UTC m=+1433.063163361" observedRunningTime="2026-02-16 15:17:19.692824775 +0000 UTC m=+1433.877801851" watchObservedRunningTime="2026-02-16 15:17:19.705826561 +0000 UTC m=+1433.890803637"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.537287 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.699496 4705 generic.go:334] "Generic (PLEG): container finished" podID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerID="a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071" exitCode=0
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.699536 4705 generic.go:334] "Generic (PLEG): container finished" podID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerID="8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f" exitCode=2
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.699545 4705 generic.go:334] "Generic (PLEG): container finished" podID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerID="3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a" exitCode=0
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.699626 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerDied","Data":"a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071"}
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.699668 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerDied","Data":"8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f"}
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.699680 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerDied","Data":"3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a"}
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.701656 4705 generic.go:334] "Generic (PLEG): container finished" podID="93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" containerID="9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea" exitCode=0
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.701745 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac","Type":"ContainerDied","Data":"9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea"}
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.701788 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac","Type":"ContainerDied","Data":"fc7c9ea585cc1fde92feb6b64f7c9647742d877ff5656a5cd26ed4a40b9bc589"}
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.701815 4705 scope.go:117] "RemoveContainer" containerID="9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.701956 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.705292 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d","Type":"ContainerStarted","Data":"d50eec337da913870a7bb170ef7c4121a92ee2d0dbee040bfc9c39f0b41bb21a"}
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.705429 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.714836 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-combined-ca-bundle\") pod \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") "
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.714934 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-config-data\") pod \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") "
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.715067 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vrqg\" (UniqueName: \"kubernetes.io/projected/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-kube-api-access-4vrqg\") pod \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\" (UID: \"93c8ffdb-1ace-4ecc-8d85-10fcfea504ac\") "
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.735945 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-kube-api-access-4vrqg" (OuterVolumeSpecName: "kube-api-access-4vrqg") pod "93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" (UID: "93c8ffdb-1ace-4ecc-8d85-10fcfea504ac"). InnerVolumeSpecName "kube-api-access-4vrqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.740801 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.740783187 podStartE2EDuration="2.740783187s" podCreationTimestamp="2026-02-16 15:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:20.730439786 +0000 UTC m=+1434.915416862" watchObservedRunningTime="2026-02-16 15:17:20.740783187 +0000 UTC m=+1434.925760263"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.742643 4705 scope.go:117] "RemoveContainer" containerID="9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea"
Feb 16 15:17:20 crc kubenswrapper[4705]: E0216 15:17:20.743225 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea\": container with ID starting with 9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea not found: ID does not exist" containerID="9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.743261 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea"} err="failed to get container status \"9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea\": rpc error: code = NotFound desc = could not find container \"9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea\": container with ID starting with 9a15ae75e1f017b50c7e2383f115114fb9b73b94406db6efd8943d38831999ea not found: ID does not exist"
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.764771 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-config-data" (OuterVolumeSpecName: "config-data") pod "93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" (UID: "93c8ffdb-1ace-4ecc-8d85-10fcfea504ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.772381 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" (UID: "93c8ffdb-1ace-4ecc-8d85-10fcfea504ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.818508 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vrqg\" (UniqueName: \"kubernetes.io/projected/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-kube-api-access-4vrqg\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.818546 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:20 crc kubenswrapper[4705]: I0216 15:17:20.818561 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.100929 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.112699 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.129969 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:21 crc kubenswrapper[4705]: E0216 15:17:21.130613 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" containerName="nova-scheduler-scheduler"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.130632 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" containerName="nova-scheduler-scheduler"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.130899 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" containerName="nova-scheduler-scheduler"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.131839 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.134225 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.148792 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.238346 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsrjk\" (UniqueName: \"kubernetes.io/projected/24dafc8c-fbe7-45cc-9558-fad23223b4d0-kube-api-access-wsrjk\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.238448 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-config-data\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.238727 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.341501 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.342066 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsrjk\" (UniqueName: \"kubernetes.io/projected/24dafc8c-fbe7-45cc-9558-fad23223b4d0-kube-api-access-wsrjk\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.342185 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-config-data\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.355610 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-config-data\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.359067 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.379149 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsrjk\" (UniqueName: \"kubernetes.io/projected/24dafc8c-fbe7-45cc-9558-fad23223b4d0-kube-api-access-wsrjk\") pod \"nova-scheduler-0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: I0216 15:17:21.459151 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:17:21 crc kubenswrapper[4705]: E0216 15:17:21.944522 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd192f950_fab8_43a1_828b_4bc1613acb4f.slice/crio-5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.059120 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:17:22 crc kubenswrapper[4705]: W0216 15:17:22.079948 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24dafc8c_fbe7_45cc_9558_fad23223b4d0.slice/crio-3413cd4f1b552ac8085e42f4581ec09733745e00127ced13a53b75b47777a814 WatchSource:0}: Error finding container 3413cd4f1b552ac8085e42f4581ec09733745e00127ced13a53b75b47777a814: Status 404 returned error can't find the container with id 3413cd4f1b552ac8085e42f4581ec09733745e00127ced13a53b75b47777a814
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.440503 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c8ffdb-1ace-4ecc-8d85-10fcfea504ac" path="/var/lib/kubelet/pods/93c8ffdb-1ace-4ecc-8d85-10fcfea504ac/volumes"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.545807 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.693890 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192f950-fab8-43a1-828b-4bc1613acb4f-logs\") pod \"d192f950-fab8-43a1-828b-4bc1613acb4f\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") "
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.694485 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbn44\" (UniqueName: \"kubernetes.io/projected/d192f950-fab8-43a1-828b-4bc1613acb4f-kube-api-access-rbn44\") pod \"d192f950-fab8-43a1-828b-4bc1613acb4f\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") "
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.694577 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d192f950-fab8-43a1-828b-4bc1613acb4f-logs" (OuterVolumeSpecName: "logs") pod "d192f950-fab8-43a1-828b-4bc1613acb4f" (UID: "d192f950-fab8-43a1-828b-4bc1613acb4f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.694658 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-combined-ca-bundle\") pod \"d192f950-fab8-43a1-828b-4bc1613acb4f\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") "
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.694706 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-config-data\") pod \"d192f950-fab8-43a1-828b-4bc1613acb4f\" (UID: \"d192f950-fab8-43a1-828b-4bc1613acb4f\") "
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.695503 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d192f950-fab8-43a1-828b-4bc1613acb4f-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.713852 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d192f950-fab8-43a1-828b-4bc1613acb4f-kube-api-access-rbn44" (OuterVolumeSpecName: "kube-api-access-rbn44") pod "d192f950-fab8-43a1-828b-4bc1613acb4f" (UID: "d192f950-fab8-43a1-828b-4bc1613acb4f"). InnerVolumeSpecName "kube-api-access-rbn44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.730819 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-config-data" (OuterVolumeSpecName: "config-data") pod "d192f950-fab8-43a1-828b-4bc1613acb4f" (UID: "d192f950-fab8-43a1-828b-4bc1613acb4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.744952 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d192f950-fab8-43a1-828b-4bc1613acb4f" (UID: "d192f950-fab8-43a1-828b-4bc1613acb4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.758977 4705 generic.go:334] "Generic (PLEG): container finished" podID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerID="5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742" exitCode=0
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.759058 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.759058 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d192f950-fab8-43a1-828b-4bc1613acb4f","Type":"ContainerDied","Data":"5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742"}
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.759158 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d192f950-fab8-43a1-828b-4bc1613acb4f","Type":"ContainerDied","Data":"1536a95ab5596e441f283dcccf66e85b779a0237afc5c6e0d01652df6f0e34b4"}
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.759186 4705 scope.go:117] "RemoveContainer" containerID="5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.761107 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"24dafc8c-fbe7-45cc-9558-fad23223b4d0","Type":"ContainerStarted","Data":"48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372"}
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.761155 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"24dafc8c-fbe7-45cc-9558-fad23223b4d0","Type":"ContainerStarted","Data":"3413cd4f1b552ac8085e42f4581ec09733745e00127ced13a53b75b47777a814"}
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.798691 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.798726 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d192f950-fab8-43a1-828b-4bc1613acb4f-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.798738 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbn44\" (UniqueName: \"kubernetes.io/projected/d192f950-fab8-43a1-828b-4bc1613acb4f-kube-api-access-rbn44\") on node \"crc\" DevicePath \"\""
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.804141 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.8041101419999999 podStartE2EDuration="1.804110142s" podCreationTimestamp="2026-02-16 15:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:22.788550304 +0000 UTC m=+1436.973527380" watchObservedRunningTime="2026-02-16 15:17:22.804110142 +0000 UTC m=+1436.989087218"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.831449 4705 scope.go:117] "RemoveContainer" containerID="68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.845344 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.870540 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.885219 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:17:22 crc kubenswrapper[4705]: E0216 15:17:22.886081 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-log"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.886109 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-log"
Feb 16 15:17:22 crc kubenswrapper[4705]: E0216 15:17:22.886168 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-api"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.886178 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-api"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.886512 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-api"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.886543 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" containerName="nova-api-log"
Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.888315 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.888932 4705 scope.go:117] "RemoveContainer" containerID="5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742" Feb 16 15:17:22 crc kubenswrapper[4705]: E0216 15:17:22.889594 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742\": container with ID starting with 5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742 not found: ID does not exist" containerID="5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742" Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.889635 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742"} err="failed to get container status \"5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742\": rpc error: code = NotFound desc = could not find container \"5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742\": container with ID starting with 5c2be32658e6089c5f67e3524994dae067b966a3bd48e35b6275eb2bf6318742 not found: ID does not exist" Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.889669 4705 scope.go:117] "RemoveContainer" containerID="68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d" Feb 16 15:17:22 crc kubenswrapper[4705]: E0216 15:17:22.890366 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d\": container with ID starting with 68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d not found: ID does not exist" containerID="68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d" Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.890417 
4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d"} err="failed to get container status \"68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d\": rpc error: code = NotFound desc = could not find container \"68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d\": container with ID starting with 68b73237bb577d307ade1d14ddb04d865c22095cc978a80662e09dc6dba03c6d not found: ID does not exist" Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.891483 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 15:17:22 crc kubenswrapper[4705]: I0216 15:17:22.901042 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.006325 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-config-data\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.006402 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntxgv\" (UniqueName: \"kubernetes.io/projected/a482712d-42ed-49b1-b0eb-fb1cf899f3db-kube-api-access-ntxgv\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.006548 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a482712d-42ed-49b1-b0eb-fb1cf899f3db-logs\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 
15:17:23.006605 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.109692 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-config-data\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.109766 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntxgv\" (UniqueName: \"kubernetes.io/projected/a482712d-42ed-49b1-b0eb-fb1cf899f3db-kube-api-access-ntxgv\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.109872 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a482712d-42ed-49b1-b0eb-fb1cf899f3db-logs\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.109906 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.111000 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a482712d-42ed-49b1-b0eb-fb1cf899f3db-logs\") pod \"nova-api-0\" (UID: 
\"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.118030 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.118115 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-config-data\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.133686 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntxgv\" (UniqueName: \"kubernetes.io/projected/a482712d-42ed-49b1-b0eb-fb1cf899f3db-kube-api-access-ntxgv\") pod \"nova-api-0\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.215981 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:17:23 crc kubenswrapper[4705]: I0216 15:17:23.764798 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.433707 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d192f950-fab8-43a1-828b-4bc1613acb4f" path="/var/lib/kubelet/pods/d192f950-fab8-43a1-828b-4bc1613acb4f/volumes" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.784283 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.805917 4705 generic.go:334] "Generic (PLEG): container finished" podID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerID="f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f" exitCode=0 Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.806012 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerDied","Data":"f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f"} Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.806047 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1c48521-25a2-4bd8-be3f-ad6da6409486","Type":"ContainerDied","Data":"4259daba5069f1ad1d3855f14ca5d403733a2ff26df6d21b1e554e1a1f3397e0"} Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.806067 4705 scope.go:117] "RemoveContainer" containerID="a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.806148 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.809207 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a482712d-42ed-49b1-b0eb-fb1cf899f3db","Type":"ContainerStarted","Data":"3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8"} Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.809231 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a482712d-42ed-49b1-b0eb-fb1cf899f3db","Type":"ContainerStarted","Data":"66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8"} Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.809242 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a482712d-42ed-49b1-b0eb-fb1cf899f3db","Type":"ContainerStarted","Data":"b9e16d20a34c818b351cfcb18e6ae185d36b1c587820242a2f7a8a4d81bd9408"} Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.854598 4705 scope.go:117] "RemoveContainer" containerID="8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.880994 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.88097043 podStartE2EDuration="2.88097043s" podCreationTimestamp="2026-02-16 15:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:24.865230276 +0000 UTC m=+1439.050207362" watchObservedRunningTime="2026-02-16 15:17:24.88097043 +0000 UTC m=+1439.065947516" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.883812 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-scripts\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: 
\"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.884020 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-run-httpd\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.884158 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs7dx\" (UniqueName: \"kubernetes.io/projected/f1c48521-25a2-4bd8-be3f-ad6da6409486-kube-api-access-rs7dx\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.884289 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-combined-ca-bundle\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.884550 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-sg-core-conf-yaml\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.884605 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-config-data\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.884671 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-log-httpd\") pod \"f1c48521-25a2-4bd8-be3f-ad6da6409486\" (UID: \"f1c48521-25a2-4bd8-be3f-ad6da6409486\") " Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.886479 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.887040 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.904513 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-scripts" (OuterVolumeSpecName: "scripts") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.904511 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c48521-25a2-4bd8-be3f-ad6da6409486-kube-api-access-rs7dx" (OuterVolumeSpecName: "kube-api-access-rs7dx") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "kube-api-access-rs7dx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.921218 4705 scope.go:117] "RemoveContainer" containerID="3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.957981 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.958602 4705 scope.go:117] "RemoveContainer" containerID="f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.987712 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.988287 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.988303 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.988347 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1c48521-25a2-4bd8-be3f-ad6da6409486-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.988362 4705 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-rs7dx\" (UniqueName: \"kubernetes.io/projected/f1c48521-25a2-4bd8-be3f-ad6da6409486-kube-api-access-rs7dx\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.988423 4705 scope.go:117] "RemoveContainer" containerID="a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071" Feb 16 15:17:24 crc kubenswrapper[4705]: E0216 15:17:24.989035 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071\": container with ID starting with a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071 not found: ID does not exist" containerID="a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.989109 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071"} err="failed to get container status \"a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071\": rpc error: code = NotFound desc = could not find container \"a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071\": container with ID starting with a9c00709ac445d0c1a41796d5b4d04e57efffa9ce829aaa4bd1fbbe838d5f071 not found: ID does not exist" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.989166 4705 scope.go:117] "RemoveContainer" containerID="8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f" Feb 16 15:17:24 crc kubenswrapper[4705]: E0216 15:17:24.989622 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f\": container with ID starting with 8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f not found: ID does not exist" 
containerID="8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.989658 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f"} err="failed to get container status \"8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f\": rpc error: code = NotFound desc = could not find container \"8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f\": container with ID starting with 8c685f7487ab0d47b27e168bcc0e1ba6d6b3dae0876420fddaeb261ec1be463f not found: ID does not exist" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.989678 4705 scope.go:117] "RemoveContainer" containerID="3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a" Feb 16 15:17:24 crc kubenswrapper[4705]: E0216 15:17:24.990003 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a\": container with ID starting with 3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a not found: ID does not exist" containerID="3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.990033 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a"} err="failed to get container status \"3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a\": rpc error: code = NotFound desc = could not find container \"3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a\": container with ID starting with 3a69e5eed7c1fbc310f693ff0d5b66f15900c0504e7471fdbd56c0d72e57e93a not found: ID does not exist" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.990071 4705 scope.go:117] 
"RemoveContainer" containerID="f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f" Feb 16 15:17:24 crc kubenswrapper[4705]: E0216 15:17:24.990315 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f\": container with ID starting with f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f not found: ID does not exist" containerID="f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f" Feb 16 15:17:24 crc kubenswrapper[4705]: I0216 15:17:24.990356 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f"} err="failed to get container status \"f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f\": rpc error: code = NotFound desc = could not find container \"f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f\": container with ID starting with f8577935fb728e56d839d0b99655986bf76203626a3da640aaaa8e2a54d6e06f not found: ID does not exist" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.007206 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.070907 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-config-data" (OuterVolumeSpecName: "config-data") pod "f1c48521-25a2-4bd8-be3f-ad6da6409486" (UID: "f1c48521-25a2-4bd8-be3f-ad6da6409486"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.090020 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.090056 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c48521-25a2-4bd8-be3f-ad6da6409486-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.190695 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.206276 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.262856 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:25 crc kubenswrapper[4705]: E0216 15:17:25.263612 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-central-agent" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263631 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-central-agent" Feb 16 15:17:25 crc kubenswrapper[4705]: E0216 15:17:25.263644 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="proxy-httpd" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263652 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="proxy-httpd" Feb 16 15:17:25 crc kubenswrapper[4705]: E0216 15:17:25.263684 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-notification-agent" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263691 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-notification-agent" Feb 16 15:17:25 crc kubenswrapper[4705]: E0216 15:17:25.263714 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="sg-core" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263720 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="sg-core" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263961 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-notification-agent" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263982 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="sg-core" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.263999 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="ceilometer-central-agent" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.264005 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" containerName="proxy-httpd" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.266437 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.279577 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.279645 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.304579 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.408864 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhffq\" (UniqueName: \"kubernetes.io/projected/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-kube-api-access-lhffq\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.409029 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-config-data\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.409107 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.409132 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.409156 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-run-httpd\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.409198 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-scripts\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.409219 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-log-httpd\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512003 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhffq\" (UniqueName: \"kubernetes.io/projected/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-kube-api-access-lhffq\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512127 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-config-data\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512195 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512218 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512237 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-run-httpd\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512267 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-scripts\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.512284 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-log-httpd\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.513655 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-log-httpd\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " 
pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.517822 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-scripts\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.517963 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-run-httpd\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.518073 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.526032 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.531842 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhffq\" (UniqueName: \"kubernetes.io/projected/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-kube-api-access-lhffq\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.532838 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-config-data\") pod \"ceilometer-0\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " pod="openstack/ceilometer-0" Feb 16 15:17:25 crc kubenswrapper[4705]: I0216 15:17:25.606733 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:26 crc kubenswrapper[4705]: I0216 15:17:26.094017 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:26 crc kubenswrapper[4705]: I0216 15:17:26.460790 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c48521-25a2-4bd8-be3f-ad6da6409486" path="/var/lib/kubelet/pods/f1c48521-25a2-4bd8-be3f-ad6da6409486/volumes" Feb 16 15:17:26 crc kubenswrapper[4705]: I0216 15:17:26.462953 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 15:17:26 crc kubenswrapper[4705]: I0216 15:17:26.840550 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerStarted","Data":"18ae1c633d349b8c0b020bf752fc9e39aa39bfd26d6690fc4fca07118b69dd82"} Feb 16 15:17:26 crc kubenswrapper[4705]: I0216 15:17:26.841357 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerStarted","Data":"7af76ea020969884cac1afc67b6c684eaea556dc0d059bdf3f133791959a3f39"} Feb 16 15:17:27 crc kubenswrapper[4705]: I0216 15:17:27.856515 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerStarted","Data":"5596960c9342b06a59fbf2992d6d97a46e0198640a405e791967558c0f6addd2"} Feb 16 15:17:28 crc kubenswrapper[4705]: I0216 15:17:28.880346 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerStarted","Data":"e2ac5205d4a22308f913bec93b73c5aa9942844a6633ab0df0a4c46c0609f37a"} Feb 16 15:17:28 crc kubenswrapper[4705]: I0216 15:17:28.997057 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 15:17:29 crc kubenswrapper[4705]: I0216 15:17:29.893199 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerStarted","Data":"f4d4c2e298c4ba6337b8d63f488fb5af7c133674755bc78855aa9149d62ea38c"} Feb 16 15:17:29 crc kubenswrapper[4705]: I0216 15:17:29.893873 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:17:29 crc kubenswrapper[4705]: I0216 15:17:29.926319 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5737448870000001 podStartE2EDuration="4.926301601s" podCreationTimestamp="2026-02-16 15:17:25 +0000 UTC" firstStartedPulling="2026-02-16 15:17:26.101689418 +0000 UTC m=+1440.286666494" lastFinishedPulling="2026-02-16 15:17:29.454246132 +0000 UTC m=+1443.639223208" observedRunningTime="2026-02-16 15:17:29.920836347 +0000 UTC m=+1444.105813443" watchObservedRunningTime="2026-02-16 15:17:29.926301601 +0000 UTC m=+1444.111278677" Feb 16 15:17:31 crc kubenswrapper[4705]: I0216 15:17:31.460422 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 15:17:31 crc kubenswrapper[4705]: I0216 15:17:31.525194 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 15:17:31 crc kubenswrapper[4705]: I0216 15:17:31.971761 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 15:17:33 crc kubenswrapper[4705]: I0216 15:17:33.216679 4705 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:17:33 crc kubenswrapper[4705]: I0216 15:17:33.217137 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:17:34 crc kubenswrapper[4705]: I0216 15:17:34.307640 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:17:34 crc kubenswrapper[4705]: I0216 15:17:34.307631 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.800411 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.812143 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.893859 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-config-data\") pod \"c403fb44-6250-449b-b257-953b925c635a\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.894014 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-combined-ca-bundle\") pod \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.894212 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-combined-ca-bundle\") pod \"c403fb44-6250-449b-b257-953b925c635a\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.894287 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmx7h\" (UniqueName: \"kubernetes.io/projected/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-kube-api-access-gmx7h\") pod \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.894405 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c403fb44-6250-449b-b257-953b925c635a-logs\") pod \"c403fb44-6250-449b-b257-953b925c635a\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.894515 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-config-data\") pod \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\" (UID: \"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c\") " Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.894860 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6245h\" (UniqueName: \"kubernetes.io/projected/c403fb44-6250-449b-b257-953b925c635a-kube-api-access-6245h\") pod \"c403fb44-6250-449b-b257-953b925c635a\" (UID: \"c403fb44-6250-449b-b257-953b925c635a\") " Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.898900 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c403fb44-6250-449b-b257-953b925c635a-logs" (OuterVolumeSpecName: "logs") pod "c403fb44-6250-449b-b257-953b925c635a" (UID: "c403fb44-6250-449b-b257-953b925c635a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.904255 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-kube-api-access-gmx7h" (OuterVolumeSpecName: "kube-api-access-gmx7h") pod "bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" (UID: "bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c"). InnerVolumeSpecName "kube-api-access-gmx7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.907236 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c403fb44-6250-449b-b257-953b925c635a-kube-api-access-6245h" (OuterVolumeSpecName: "kube-api-access-6245h") pod "c403fb44-6250-449b-b257-953b925c635a" (UID: "c403fb44-6250-449b-b257-953b925c635a"). InnerVolumeSpecName "kube-api-access-6245h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.936520 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-config-data" (OuterVolumeSpecName: "config-data") pod "c403fb44-6250-449b-b257-953b925c635a" (UID: "c403fb44-6250-449b-b257-953b925c635a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.937832 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" (UID: "bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.947194 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c403fb44-6250-449b-b257-953b925c635a" (UID: "c403fb44-6250-449b-b257-953b925c635a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:40 crc kubenswrapper[4705]: I0216 15:17:40.954149 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-config-data" (OuterVolumeSpecName: "config-data") pod "bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" (UID: "bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000103 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6245h\" (UniqueName: \"kubernetes.io/projected/c403fb44-6250-449b-b257-953b925c635a-kube-api-access-6245h\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000160 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000183 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000203 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c403fb44-6250-449b-b257-953b925c635a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000224 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmx7h\" (UniqueName: \"kubernetes.io/projected/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-kube-api-access-gmx7h\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000244 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c403fb44-6250-449b-b257-953b925c635a-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.000265 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.114195 4705 
generic.go:334] "Generic (PLEG): container finished" podID="bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" containerID="d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c" exitCode=137 Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.114524 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c","Type":"ContainerDied","Data":"d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c"} Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.115028 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c","Type":"ContainerDied","Data":"2c7d553310530035d6f4243d4ec8d424a9dbcb3e3927033f1971bef339bd967f"} Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.114635 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.115076 4705 scope.go:117] "RemoveContainer" containerID="d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.118061 4705 generic.go:334] "Generic (PLEG): container finished" podID="c403fb44-6250-449b-b257-953b925c635a" containerID="a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51" exitCode=137 Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.118127 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c403fb44-6250-449b-b257-953b925c635a","Type":"ContainerDied","Data":"a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51"} Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.118174 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"c403fb44-6250-449b-b257-953b925c635a","Type":"ContainerDied","Data":"dee0ea11222770d7565040c2a8d452d725637a688407fbd260ff2426c890c0e6"} Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.118282 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.175683 4705 scope.go:117] "RemoveContainer" containerID="d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c" Feb 16 15:17:41 crc kubenswrapper[4705]: E0216 15:17:41.185810 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c\": container with ID starting with d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c not found: ID does not exist" containerID="d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.185868 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c"} err="failed to get container status \"d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c\": rpc error: code = NotFound desc = could not find container \"d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c\": container with ID starting with d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c not found: ID does not exist" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.185903 4705 scope.go:117] "RemoveContainer" containerID="a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.206811 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.242069 4705 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.255146 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.266905 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.274646 4705 scope.go:117] "RemoveContainer" containerID="4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.290574 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:17:41 crc kubenswrapper[4705]: E0216 15:17:41.292239 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-log" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.292263 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-log" Feb 16 15:17:41 crc kubenswrapper[4705]: E0216 15:17:41.292298 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-metadata" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.292310 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-metadata" Feb 16 15:17:41 crc kubenswrapper[4705]: E0216 15:17:41.292408 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.292422 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 
15:17:41.296006 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.296053 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-log" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.296107 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c403fb44-6250-449b-b257-953b925c635a" containerName="nova-metadata-metadata" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.298014 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.301914 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.304573 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.304905 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.315300 4705 scope.go:117] "RemoveContainer" containerID="a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.315466 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:17:41 crc kubenswrapper[4705]: E0216 15:17:41.315936 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51\": container with ID starting with 
a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51 not found: ID does not exist" containerID="a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.315973 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51"} err="failed to get container status \"a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51\": rpc error: code = NotFound desc = could not find container \"a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51\": container with ID starting with a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51 not found: ID does not exist" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.316003 4705 scope.go:117] "RemoveContainer" containerID="4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035" Feb 16 15:17:41 crc kubenswrapper[4705]: E0216 15:17:41.316580 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035\": container with ID starting with 4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035 not found: ID does not exist" containerID="4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.316663 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035"} err="failed to get container status \"4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035\": rpc error: code = NotFound desc = could not find container \"4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035\": container with ID starting with 4361eb614ba96c5e0ff8efdb0cad2211ffdd1e8209cc9a717f3f8a6486b10035 not found: ID does not 
exist" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.334544 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.344342 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.347234 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.348098 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.360029 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.419523 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.420813 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.420884 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2xzl\" (UniqueName: \"kubernetes.io/projected/628e6201-a994-4614-9b4d-3f261b718186-kube-api-access-t2xzl\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " 
pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421067 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421275 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421314 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421352 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/628e6201-a994-4614-9b4d-3f261b718186-logs\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421508 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjmgv\" (UniqueName: \"kubernetes.io/projected/b49f6329-2396-4d3e-9b28-2dd3586b1965-kube-api-access-zjmgv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421613 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.421792 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-config-data\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524232 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-config-data\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524364 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524450 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524473 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2xzl\" (UniqueName: \"kubernetes.io/projected/628e6201-a994-4614-9b4d-3f261b718186-kube-api-access-t2xzl\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524490 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524518 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524541 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524566 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/628e6201-a994-4614-9b4d-3f261b718186-logs\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524662 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjmgv\" (UniqueName: 
\"kubernetes.io/projected/b49f6329-2396-4d3e-9b28-2dd3586b1965-kube-api-access-zjmgv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.524722 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.525823 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/628e6201-a994-4614-9b4d-3f261b718186-logs\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.532341 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.532527 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.533237 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.532303 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.533980 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-config-data\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.535969 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.536074 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49f6329-2396-4d3e-9b28-2dd3586b1965-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.544571 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjmgv\" (UniqueName: \"kubernetes.io/projected/b49f6329-2396-4d3e-9b28-2dd3586b1965-kube-api-access-zjmgv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b49f6329-2396-4d3e-9b28-2dd3586b1965\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.546616 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2xzl\" (UniqueName: \"kubernetes.io/projected/628e6201-a994-4614-9b4d-3f261b718186-kube-api-access-t2xzl\") pod \"nova-metadata-0\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " pod="openstack/nova-metadata-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.640205 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:41 crc kubenswrapper[4705]: I0216 15:17:41.664191 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:17:42 crc kubenswrapper[4705]: I0216 15:17:42.139252 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="881aa943-ed5c-4d96-aa9e-3942b76d8e1a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.237:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:17:42 crc kubenswrapper[4705]: I0216 15:17:42.193836 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:17:42 crc kubenswrapper[4705]: I0216 15:17:42.205860 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:17:42 crc kubenswrapper[4705]: I0216 15:17:42.437315 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c" path="/var/lib/kubelet/pods/bd6ad68b-9e76-4c9d-ad39-6377b4b51f4c/volumes" Feb 16 15:17:42 crc kubenswrapper[4705]: I0216 15:17:42.438181 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c403fb44-6250-449b-b257-953b925c635a" path="/var/lib/kubelet/pods/c403fb44-6250-449b-b257-953b925c635a/volumes" Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.155856 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"b49f6329-2396-4d3e-9b28-2dd3586b1965","Type":"ContainerStarted","Data":"7e7475ab313e465395ff2e16f5d62cebf15b40dcf04162ce2e50542d92f6cb80"} Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.156586 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b49f6329-2396-4d3e-9b28-2dd3586b1965","Type":"ContainerStarted","Data":"ae104e1efd72f98ec608627f695ab716a8c8a1949b6a9a044342387b09347f55"} Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.157389 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"628e6201-a994-4614-9b4d-3f261b718186","Type":"ContainerStarted","Data":"972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de"} Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.157439 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"628e6201-a994-4614-9b4d-3f261b718186","Type":"ContainerStarted","Data":"fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e"} Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.157450 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"628e6201-a994-4614-9b4d-3f261b718186","Type":"ContainerStarted","Data":"eeec271298f4dcb2eb43a0a1c49fcdad72fcc161271d50f3ad69a11322b20f9c"} Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.195061 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.195019172 podStartE2EDuration="2.195019172s" podCreationTimestamp="2026-02-16 15:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:43.181621235 +0000 UTC m=+1457.366598351" watchObservedRunningTime="2026-02-16 15:17:43.195019172 +0000 UTC m=+1457.379996258" Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.222792 
4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.225394 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.225838 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.230317 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 15:17:43 crc kubenswrapper[4705]: I0216 15:17:43.258986 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.258957283 podStartE2EDuration="2.258957283s" podCreationTimestamp="2026-02-16 15:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:43.207397631 +0000 UTC m=+1457.392374717" watchObservedRunningTime="2026-02-16 15:17:43.258957283 +0000 UTC m=+1457.443934359" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.175753 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.185204 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.470432 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx"] Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.473764 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.508129 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx"] Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.657194 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.657325 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-config\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.657606 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.657906 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.657988 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.658099 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z7sr\" (UniqueName: \"kubernetes.io/projected/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-kube-api-access-5z7sr\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.760501 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z7sr\" (UniqueName: \"kubernetes.io/projected/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-kube-api-access-5z7sr\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.760705 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.760820 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-config\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.760891 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.760967 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.761003 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.762131 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-config\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.762134 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.762150 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.762298 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.762878 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.788845 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z7sr\" (UniqueName: \"kubernetes.io/projected/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-kube-api-access-5z7sr\") pod \"dnsmasq-dns-6b7bbf7cf9-t6qzx\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:44 crc kubenswrapper[4705]: I0216 15:17:44.821011 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:45 crc kubenswrapper[4705]: I0216 15:17:45.407535 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx"] Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.212886 4705 generic.go:334] "Generic (PLEG): container finished" podID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerID="6eed687bcb719d3e812c0d5596618acff3bcb4d19391166e9b43a17a41b58c2d" exitCode=0 Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.213052 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" event={"ID":"33cb0a6c-7599-4301-b7f4-630b9ccfdf42","Type":"ContainerDied","Data":"6eed687bcb719d3e812c0d5596618acff3bcb4d19391166e9b43a17a41b58c2d"} Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.213662 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" event={"ID":"33cb0a6c-7599-4301-b7f4-630b9ccfdf42","Type":"ContainerStarted","Data":"fd288e684e0a43e4b376cb33683431b8af354b638eab9d3f39fe75d11b79e614"} Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.640390 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.649220 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.649529 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-central-agent" containerID="cri-o://18ae1c633d349b8c0b020bf752fc9e39aa39bfd26d6690fc4fca07118b69dd82" gracePeriod=30 Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.651169 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="sg-core" containerID="cri-o://e2ac5205d4a22308f913bec93b73c5aa9942844a6633ab0df0a4c46c0609f37a" gracePeriod=30 Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.651234 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-notification-agent" containerID="cri-o://5596960c9342b06a59fbf2992d6d97a46e0198640a405e791967558c0f6addd2" gracePeriod=30 Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.651406 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="proxy-httpd" containerID="cri-o://f4d4c2e298c4ba6337b8d63f488fb5af7c133674755bc78855aa9149d62ea38c" gracePeriod=30 Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.663430 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.252:3000/\": EOF" Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.664521 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:17:46 crc kubenswrapper[4705]: I0216 15:17:46.664622 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.209769 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.266911 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" event={"ID":"33cb0a6c-7599-4301-b7f4-630b9ccfdf42","Type":"ContainerStarted","Data":"44229c16dd4052675ac541b69178773030255dd4012f291db029d9bed3fffff7"} Feb 16 15:17:47 crc kubenswrapper[4705]: 
I0216 15:17:47.268063 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.277621 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerDied","Data":"f4d4c2e298c4ba6337b8d63f488fb5af7c133674755bc78855aa9149d62ea38c"} Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.276938 4705 generic.go:334] "Generic (PLEG): container finished" podID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerID="f4d4c2e298c4ba6337b8d63f488fb5af7c133674755bc78855aa9149d62ea38c" exitCode=0 Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.279750 4705 generic.go:334] "Generic (PLEG): container finished" podID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerID="e2ac5205d4a22308f913bec93b73c5aa9942844a6633ab0df0a4c46c0609f37a" exitCode=2 Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.279821 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerDied","Data":"e2ac5205d4a22308f913bec93b73c5aa9942844a6633ab0df0a4c46c0609f37a"} Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.280351 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-log" containerID="cri-o://66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8" gracePeriod=30 Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.280431 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-api" containerID="cri-o://3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8" gracePeriod=30 Feb 16 15:17:47 crc kubenswrapper[4705]: I0216 15:17:47.306765 4705 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" podStartSLOduration=3.306736153 podStartE2EDuration="3.306736153s" podCreationTimestamp="2026-02-16 15:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:47.295488606 +0000 UTC m=+1461.480465692" watchObservedRunningTime="2026-02-16 15:17:47.306736153 +0000 UTC m=+1461.491713229" Feb 16 15:17:47 crc kubenswrapper[4705]: E0216 15:17:47.794659 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda482712d_42ed_49b1_b0eb_fb1cf899f3db.slice/crio-66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd6ad68b_9e76_4c9d_ad39_6377b4b51f4c.slice/crio-d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc403fb44_6250_449b_b257_953b925c635a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda482712d_42ed_49b1_b0eb_fb1cf899f3db.slice/crio-conmon-66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc403fb44_6250_449b_b257_953b925c635a.slice/crio-conmon-a8f802532b8a75cbc07d3b40e5cf64a818c2b6f172a77afb206e9a4edc830c51.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd6ad68b_9e76_4c9d_ad39_6377b4b51f4c.slice/crio-conmon-d5774fd1f0dee796bc51e5d2aec6ef51143a1648379c51cae16a172e2264634c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd6ad68b_9e76_4c9d_ad39_6377b4b51f4c.slice/crio-2c7d553310530035d6f4243d4ec8d424a9dbcb3e3927033f1971bef339bd967f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3bb879_c0d5_4b09_a454_034daa93ab77.slice/crio-1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3bb879_c0d5_4b09_a454_034daa93ab77.slice/crio-conmon-1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd6ad68b_9e76_4c9d_ad39_6377b4b51f4c.slice\": RecentStats: unable to find data in memory cache]" Feb 16 15:17:47 crc kubenswrapper[4705]: E0216 15:17:47.795272 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3bb879_c0d5_4b09_a454_034daa93ab77.slice/crio-conmon-1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.192118 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.295249 4705 generic.go:334] "Generic (PLEG): container finished" podID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerID="66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8" exitCode=143 Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.295305 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a482712d-42ed-49b1-b0eb-fb1cf899f3db","Type":"ContainerDied","Data":"66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8"} Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.298250 4705 generic.go:334] "Generic (PLEG): container finished" podID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerID="18ae1c633d349b8c0b020bf752fc9e39aa39bfd26d6690fc4fca07118b69dd82" exitCode=0 Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.298272 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerDied","Data":"18ae1c633d349b8c0b020bf752fc9e39aa39bfd26d6690fc4fca07118b69dd82"} Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.303716 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgfgl\" (UniqueName: \"kubernetes.io/projected/6d3bb879-c0d5-4b09-a454-034daa93ab77-kube-api-access-fgfgl\") pod \"6d3bb879-c0d5-4b09-a454-034daa93ab77\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.303998 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-config-data\") pod \"6d3bb879-c0d5-4b09-a454-034daa93ab77\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.304044 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-combined-ca-bundle\") pod \"6d3bb879-c0d5-4b09-a454-034daa93ab77\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.304080 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-scripts\") pod \"6d3bb879-c0d5-4b09-a454-034daa93ab77\" (UID: \"6d3bb879-c0d5-4b09-a454-034daa93ab77\") " Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.311748 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3bb879-c0d5-4b09-a454-034daa93ab77-kube-api-access-fgfgl" (OuterVolumeSpecName: "kube-api-access-fgfgl") pod "6d3bb879-c0d5-4b09-a454-034daa93ab77" (UID: "6d3bb879-c0d5-4b09-a454-034daa93ab77"). InnerVolumeSpecName "kube-api-access-fgfgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.312994 4705 generic.go:334] "Generic (PLEG): container finished" podID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerID="1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1" exitCode=137 Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.313080 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.313123 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerDied","Data":"1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1"} Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.313189 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6d3bb879-c0d5-4b09-a454-034daa93ab77","Type":"ContainerDied","Data":"107badcc630ad4f6903ae7ffcd033ff5a892847e00104684492ac9a7124f1280"} Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.313218 4705 scope.go:117] "RemoveContainer" containerID="1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.314614 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-scripts" (OuterVolumeSpecName: "scripts") pod "6d3bb879-c0d5-4b09-a454-034daa93ab77" (UID: "6d3bb879-c0d5-4b09-a454-034daa93ab77"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.407439 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgfgl\" (UniqueName: \"kubernetes.io/projected/6d3bb879-c0d5-4b09-a454-034daa93ab77-kube-api-access-fgfgl\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.407491 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.477750 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d3bb879-c0d5-4b09-a454-034daa93ab77" (UID: "6d3bb879-c0d5-4b09-a454-034daa93ab77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.485423 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-config-data" (OuterVolumeSpecName: "config-data") pod "6d3bb879-c0d5-4b09-a454-034daa93ab77" (UID: "6d3bb879-c0d5-4b09-a454-034daa93ab77"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.510826 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.510895 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3bb879-c0d5-4b09-a454-034daa93ab77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.550361 4705 scope.go:117] "RemoveContainer" containerID="ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.583768 4705 scope.go:117] "RemoveContainer" containerID="b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.613470 4705 scope.go:117] "RemoveContainer" containerID="e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.663358 4705 scope.go:117] "RemoveContainer" containerID="1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1" Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.663988 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1\": container with ID starting with 1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1 not found: ID does not exist" containerID="1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.664044 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1"} 
err="failed to get container status \"1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1\": rpc error: code = NotFound desc = could not find container \"1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1\": container with ID starting with 1e61b33613fd710ba8e04275895acce8c86a7761be22c84a2abb343cee22abc1 not found: ID does not exist" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.664079 4705 scope.go:117] "RemoveContainer" containerID="ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c" Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.667673 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c\": container with ID starting with ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c not found: ID does not exist" containerID="ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.667718 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c"} err="failed to get container status \"ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c\": rpc error: code = NotFound desc = could not find container \"ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c\": container with ID starting with ba457a730f07117040670068640b94a403f69a0ab2818a8f55b5c56e857d7f7c not found: ID does not exist" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.667753 4705 scope.go:117] "RemoveContainer" containerID="b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f" Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.668039 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f\": container with ID starting with b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f not found: ID does not exist" containerID="b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.668062 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f"} err="failed to get container status \"b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f\": rpc error: code = NotFound desc = could not find container \"b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f\": container with ID starting with b29b0d290ef76e8f94ec36a3ec1b14ccd0a94410aa18d7f0f89e3512d6f8603f not found: ID does not exist" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.668079 4705 scope.go:117] "RemoveContainer" containerID="e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f" Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.669187 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f\": container with ID starting with e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f not found: ID does not exist" containerID="e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.669211 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f"} err="failed to get container status \"e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f\": rpc error: code = NotFound desc = could not find container \"e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f\": container with ID 
starting with e55302900f1a8714ecae756834d7cec721f74bc6a5487a6b4da4617d1422915f not found: ID does not exist" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.675579 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.709363 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.752715 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.754951 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-evaluator" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.754982 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-evaluator" Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.755016 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-api" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.755024 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-api" Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.755067 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-listener" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.755075 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-listener" Feb 16 15:17:48 crc kubenswrapper[4705]: E0216 15:17:48.755133 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-notifier" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.755140 4705 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-notifier" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.756439 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-listener" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.756479 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-notifier" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.756535 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-api" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.756564 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" containerName="aodh-evaluator" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.765282 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.772048 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-l4hnj" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.773722 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.774740 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.774916 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.774969 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.811511 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.926962 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2wkf\" (UniqueName: \"kubernetes.io/projected/8bb1d6b3-1208-4339-9d67-330c02618823-kube-api-access-k2wkf\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.927044 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-config-data\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.927188 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-public-tls-certs\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.927273 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-scripts\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.927459 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-internal-tls-certs\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:48 crc kubenswrapper[4705]: I0216 15:17:48.927499 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.030499 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-scripts\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.030654 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-internal-tls-certs\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.030697 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.030834 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2wkf\" (UniqueName: \"kubernetes.io/projected/8bb1d6b3-1208-4339-9d67-330c02618823-kube-api-access-k2wkf\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.030880 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-config-data\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.030951 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-public-tls-certs\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.036158 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-public-tls-certs\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.037101 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-scripts\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:49 
crc kubenswrapper[4705]: I0216 15:17:49.037941 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-internal-tls-certs\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.041057 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.055490 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb1d6b3-1208-4339-9d67-330c02618823-config-data\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.072008 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2wkf\" (UniqueName: \"kubernetes.io/projected/8bb1d6b3-1208-4339-9d67-330c02618823-kube-api-access-k2wkf\") pod \"aodh-0\" (UID: \"8bb1d6b3-1208-4339-9d67-330c02618823\") " pod="openstack/aodh-0" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.100132 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.611470 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6jtvt"] Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.615502 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.625744 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jtvt"] Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.695730 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 16 15:17:49 crc kubenswrapper[4705]: W0216 15:17:49.705642 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bb1d6b3_1208_4339_9d67_330c02618823.slice/crio-eea1bd39ec7adf10785dde83cb2f67f0bb6b68295e9b1a2762fbf28d2e2a29b0 WatchSource:0}: Error finding container eea1bd39ec7adf10785dde83cb2f67f0bb6b68295e9b1a2762fbf28d2e2a29b0: Status 404 returned error can't find the container with id eea1bd39ec7adf10785dde83cb2f67f0bb6b68295e9b1a2762fbf28d2e2a29b0 Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.752302 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-catalog-content\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.752391 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmf2t\" (UniqueName: \"kubernetes.io/projected/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-kube-api-access-fmf2t\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.753186 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-utilities\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.856563 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-utilities\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.856684 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-catalog-content\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.856716 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmf2t\" (UniqueName: \"kubernetes.io/projected/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-kube-api-access-fmf2t\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.857471 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-utilities\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.857595 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-catalog-content\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.881091 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmf2t\" (UniqueName: \"kubernetes.io/projected/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-kube-api-access-fmf2t\") pod \"redhat-operators-6jtvt\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") " pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:17:49 crc kubenswrapper[4705]: I0216 15:17:49.948562 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.370311 4705 generic.go:334] "Generic (PLEG): container finished" podID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerID="5596960c9342b06a59fbf2992d6d97a46e0198640a405e791967558c0f6addd2" exitCode=0 Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.370436 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerDied","Data":"5596960c9342b06a59fbf2992d6d97a46e0198640a405e791967558c0f6addd2"} Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.370876 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8","Type":"ContainerDied","Data":"7af76ea020969884cac1afc67b6c684eaea556dc0d059bdf3f133791959a3f39"} Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.370897 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7af76ea020969884cac1afc67b6c684eaea556dc0d059bdf3f133791959a3f39" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.373161 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-0" event={"ID":"8bb1d6b3-1208-4339-9d67-330c02618823","Type":"ContainerStarted","Data":"eea1bd39ec7adf10785dde83cb2f67f0bb6b68295e9b1a2762fbf28d2e2a29b0"} Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.408387 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.446712 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3bb879-c0d5-4b09-a454-034daa93ab77" path="/var/lib/kubelet/pods/6d3bb879-c0d5-4b09-a454-034daa93ab77/volumes" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.578645 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jtvt"] Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.580689 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-sg-core-conf-yaml\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.580761 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-run-httpd\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.580883 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-config-data\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.580935 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhffq\" (UniqueName: 
\"kubernetes.io/projected/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-kube-api-access-lhffq\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.580958 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-scripts\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.580999 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-log-httpd\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.581040 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-combined-ca-bundle\") pod \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\" (UID: \"978ccf0a-1d2e-4f4d-8ffc-466635f19ae8\") " Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.582930 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.585827 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.588133 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-scripts" (OuterVolumeSpecName: "scripts") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.593079 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-kube-api-access-lhffq" (OuterVolumeSpecName: "kube-api-access-lhffq") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). InnerVolumeSpecName "kube-api-access-lhffq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.657798 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.684498 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.684539 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.684550 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhffq\" (UniqueName: \"kubernetes.io/projected/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-kube-api-access-lhffq\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.684564 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.684574 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.715644 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.765657 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-config-data" (OuterVolumeSpecName: "config-data") pod "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" (UID: "978ccf0a-1d2e-4f4d-8ffc-466635f19ae8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.786797 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.786829 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:50 crc kubenswrapper[4705]: I0216 15:17:50.964415 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.094537 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-combined-ca-bundle\") pod \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.094721 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-config-data\") pod \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.095100 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a482712d-42ed-49b1-b0eb-fb1cf899f3db-logs\") pod \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.095231 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntxgv\" (UniqueName: \"kubernetes.io/projected/a482712d-42ed-49b1-b0eb-fb1cf899f3db-kube-api-access-ntxgv\") pod \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\" (UID: \"a482712d-42ed-49b1-b0eb-fb1cf899f3db\") " Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.096522 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a482712d-42ed-49b1-b0eb-fb1cf899f3db-logs" (OuterVolumeSpecName: "logs") pod "a482712d-42ed-49b1-b0eb-fb1cf899f3db" (UID: "a482712d-42ed-49b1-b0eb-fb1cf899f3db"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.117750 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a482712d-42ed-49b1-b0eb-fb1cf899f3db-kube-api-access-ntxgv" (OuterVolumeSpecName: "kube-api-access-ntxgv") pod "a482712d-42ed-49b1-b0eb-fb1cf899f3db" (UID: "a482712d-42ed-49b1-b0eb-fb1cf899f3db"). InnerVolumeSpecName "kube-api-access-ntxgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.155518 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-config-data" (OuterVolumeSpecName: "config-data") pod "a482712d-42ed-49b1-b0eb-fb1cf899f3db" (UID: "a482712d-42ed-49b1-b0eb-fb1cf899f3db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.164475 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a482712d-42ed-49b1-b0eb-fb1cf899f3db" (UID: "a482712d-42ed-49b1-b0eb-fb1cf899f3db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.198832 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.199090 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a482712d-42ed-49b1-b0eb-fb1cf899f3db-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.199193 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntxgv\" (UniqueName: \"kubernetes.io/projected/a482712d-42ed-49b1-b0eb-fb1cf899f3db-kube-api-access-ntxgv\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.199261 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a482712d-42ed-49b1-b0eb-fb1cf899f3db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.390831 4705 generic.go:334] "Generic (PLEG): container finished" podID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerID="3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8" exitCode=0 Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.390913 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a482712d-42ed-49b1-b0eb-fb1cf899f3db","Type":"ContainerDied","Data":"3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8"} Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.390964 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.391482 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a482712d-42ed-49b1-b0eb-fb1cf899f3db","Type":"ContainerDied","Data":"b9e16d20a34c818b351cfcb18e6ae185d36b1c587820242a2f7a8a4d81bd9408"} Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.391510 4705 scope.go:117] "RemoveContainer" containerID="3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.397285 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8bb1d6b3-1208-4339-9d67-330c02618823","Type":"ContainerStarted","Data":"9d7d987f3057f6bfbf32a6e31f06eb31f7c7ba3db80a5d117b8e149f9352a0e4"} Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.397562 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8bb1d6b3-1208-4339-9d67-330c02618823","Type":"ContainerStarted","Data":"4cb2380dba203b6c9018aaa81811f515ca7fcf6667eb1d9d862b6a3d11f9a192"} Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.402544 4705 generic.go:334] "Generic (PLEG): container finished" podID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerID="c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4" exitCode=0 Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.402685 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.402731 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerDied","Data":"c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4"} Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.402770 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerStarted","Data":"e1adb33222027cc4f090326df3b9dd77bb0143da9f839682a1a04a68a2f7c1af"} Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.439520 4705 scope.go:117] "RemoveContainer" containerID="66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.496055 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.522934 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.531870 4705 scope.go:117] "RemoveContainer" containerID="3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.535311 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8\": container with ID starting with 3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8 not found: ID does not exist" containerID="3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.535361 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8"} err="failed to get container status \"3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8\": rpc error: code = NotFound desc = could not find container \"3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8\": container with ID starting with 3ce4ce6e8183ccefc37fca77ddac73861562a945c4d70ac8727bc00aa1106af8 not found: ID does not exist" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.535409 4705 scope.go:117] "RemoveContainer" containerID="66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.544554 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8\": container with ID starting with 66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8 not found: ID does not exist" containerID="66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.544624 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8"} err="failed to get container status \"66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8\": rpc error: code = NotFound desc = could not find container \"66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8\": container with ID starting with 66551734e756de3a186ec593f61efe541b14b736ca9db8093e55b60c172943b8 not found: ID does not exist" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.555717 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.573446 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 
16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.615683 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.620063 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-notification-agent" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.620112 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-notification-agent" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.620253 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-log" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.620267 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-log" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.620314 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="sg-core" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.620336 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="sg-core" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.620386 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-api" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.620395 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-api" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.620418 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="proxy-httpd" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.620426 4705 
state_mem.go:107] "Deleted CPUSet assignment" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="proxy-httpd" Feb 16 15:17:51 crc kubenswrapper[4705]: E0216 15:17:51.620471 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-central-agent" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.620480 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-central-agent" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.621461 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-api" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.621491 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" containerName="nova-api-log" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.621509 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-central-agent" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.621531 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="sg-core" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.621552 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="proxy-httpd" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.621567 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" containerName="ceilometer-notification-agent" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.624672 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.631627 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.631715 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.632722 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.640796 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.654236 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.665310 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.665710 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.670144 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.670269 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.676141 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.676318 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.678279 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.695737 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.726656 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8lhh\" (UniqueName: \"kubernetes.io/projected/6977ac78-db27-460b-8a38-582c65dbb67b-kube-api-access-f8lhh\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.726718 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-public-tls-certs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.726756 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.727159 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.727387 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-config-data\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.727453 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6977ac78-db27-460b-8a38-582c65dbb67b-logs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.833590 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-config-data\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834070 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834097 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb6vt\" (UniqueName: \"kubernetes.io/projected/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-kube-api-access-lb6vt\") pod \"ceilometer-0\" (UID: 
\"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834136 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-run-httpd\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834186 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834231 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-config-data\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834266 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6977ac78-db27-460b-8a38-582c65dbb67b-logs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834394 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834484 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f8lhh\" (UniqueName: \"kubernetes.io/projected/6977ac78-db27-460b-8a38-582c65dbb67b-kube-api-access-f8lhh\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834569 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-public-tls-certs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834600 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-log-httpd\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834661 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.834710 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-scripts\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.836449 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6977ac78-db27-460b-8a38-582c65dbb67b-logs\") pod \"nova-api-0\" (UID: 
\"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.843849 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-public-tls-certs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.843906 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.857569 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8lhh\" (UniqueName: \"kubernetes.io/projected/6977ac78-db27-460b-8a38-582c65dbb67b-kube-api-access-f8lhh\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.861961 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.863627 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-config-data\") pod \"nova-api-0\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.938647 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.939763 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.940019 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-log-httpd\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.940186 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-scripts\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.940323 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-config-data\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.940425 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb6vt\" (UniqueName: \"kubernetes.io/projected/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-kube-api-access-lb6vt\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 
15:17:51.940515 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-run-httpd\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.941103 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-run-httpd\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.945481 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-log-httpd\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.948260 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.950976 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.953697 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-scripts\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.954305 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-config-data\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.957178 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:51 crc kubenswrapper[4705]: I0216 15:17:51.969974 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb6vt\" (UniqueName: \"kubernetes.io/projected/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-kube-api-access-lb6vt\") pod \"ceilometer-0\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") " pod="openstack/ceilometer-0" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.002702 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.479058 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="978ccf0a-1d2e-4f4d-8ffc-466635f19ae8" path="/var/lib/kubelet/pods/978ccf0a-1d2e-4f4d-8ffc-466635f19ae8/volumes" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.480903 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a482712d-42ed-49b1-b0eb-fb1cf899f3db" path="/var/lib/kubelet/pods/a482712d-42ed-49b1-b0eb-fb1cf899f3db/volumes" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.482002 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8bb1d6b3-1208-4339-9d67-330c02618823","Type":"ContainerStarted","Data":"a6f9dc04b61b9ef3151f79e7b43de5c5e596501dbdf9aa73754333bd3dfe7ac5"} Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.505463 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.692906 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.692941 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.718985 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-v596j"] Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.722791 
4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.728023 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.728670 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.772232 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-v596j"] Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.783655 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.890740 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kc89\" (UniqueName: \"kubernetes.io/projected/7d98759e-f50f-4b94-bd6a-8cfa1e083675-kube-api-access-4kc89\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.890791 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.890884 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-config-data\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " 
pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.890999 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-scripts\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.900547 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:17:52 crc kubenswrapper[4705]: W0216 15:17:52.919952 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6977ac78_db27_460b_8a38_582c65dbb67b.slice/crio-634fe2e8d4cd6243784fcfff154297107ca02631b3e1e85607f19102432197a6 WatchSource:0}: Error finding container 634fe2e8d4cd6243784fcfff154297107ca02631b3e1e85607f19102432197a6: Status 404 returned error can't find the container with id 634fe2e8d4cd6243784fcfff154297107ca02631b3e1e85607f19102432197a6 Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.996498 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kc89\" (UniqueName: \"kubernetes.io/projected/7d98759e-f50f-4b94-bd6a-8cfa1e083675-kube-api-access-4kc89\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.997922 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.998063 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-config-data\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:52 crc kubenswrapper[4705]: I0216 15:17:52.998253 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-scripts\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.012069 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.012986 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-scripts\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.015636 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kc89\" (UniqueName: \"kubernetes.io/projected/7d98759e-f50f-4b94-bd6a-8cfa1e083675-kube-api-access-4kc89\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.016260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-config-data\") pod \"nova-cell1-cell-mapping-v596j\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.186195 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.513464 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerStarted","Data":"aeb06779efac7c38585b17cfd3ae6968f2916d9ee186859b6bf4a5e6711bb96e"} Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.594019 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8bb1d6b3-1208-4339-9d67-330c02618823","Type":"ContainerStarted","Data":"ea6a067c1817b0e280afbb42ee719194207bb37c3d0040c7caa0f8cda7c8399c"} Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.638234 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerStarted","Data":"b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda"} Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.705758 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6977ac78-db27-460b-8a38-582c65dbb67b","Type":"ContainerStarted","Data":"3bde9a67ab57030c4271d3ea4be45bc70a1a5b80a66d369c88a326afc671bbf4"} Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.705821 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6977ac78-db27-460b-8a38-582c65dbb67b","Type":"ContainerStarted","Data":"634fe2e8d4cd6243784fcfff154297107ca02631b3e1e85607f19102432197a6"} Feb 16 15:17:53 crc kubenswrapper[4705]: I0216 15:17:53.755042 4705 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.683041777 podStartE2EDuration="5.755017647s" podCreationTimestamp="2026-02-16 15:17:48 +0000 UTC" firstStartedPulling="2026-02-16 15:17:49.708261877 +0000 UTC m=+1463.893238943" lastFinishedPulling="2026-02-16 15:17:52.780237727 +0000 UTC m=+1466.965214813" observedRunningTime="2026-02-16 15:17:53.678877402 +0000 UTC m=+1467.863854478" watchObservedRunningTime="2026-02-16 15:17:53.755017647 +0000 UTC m=+1467.939994723" Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.240696 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-v596j"] Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.718590 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-v596j" event={"ID":"7d98759e-f50f-4b94-bd6a-8cfa1e083675","Type":"ContainerStarted","Data":"eee5c8bc6c54de4fa60aca953615e0f47f05dac72e43473a8138c9827fdeee6c"} Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.719046 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-v596j" event={"ID":"7d98759e-f50f-4b94-bd6a-8cfa1e083675","Type":"ContainerStarted","Data":"e86c3ffaf3eff8a0a9a0fe2e47c66857a352df2ebc46352dbb89be5bca3ba6eb"} Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.722774 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6977ac78-db27-460b-8a38-582c65dbb67b","Type":"ContainerStarted","Data":"5d7a13d088492edc71fd1b2aa6743627c7da96a6dac89ebeaa29f15b0b7af5d3"} Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.733094 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerStarted","Data":"d3719f70dd43cd660f597910c5ac6ae7a802a77b579e0b9486b99cd05fa097dc"} Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.733162 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerStarted","Data":"4072ef38a4c487ee391e17074b51b5326fb665d3e3b590d852c735f83bad4281"} Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.742177 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-v596j" podStartSLOduration=2.742158846 podStartE2EDuration="2.742158846s" podCreationTimestamp="2026-02-16 15:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:54.737039051 +0000 UTC m=+1468.922016127" watchObservedRunningTime="2026-02-16 15:17:54.742158846 +0000 UTC m=+1468.927135922" Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.774756 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.7747365630000003 podStartE2EDuration="3.774736563s" podCreationTimestamp="2026-02-16 15:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:54.770050061 +0000 UTC m=+1468.955027137" watchObservedRunningTime="2026-02-16 15:17:54.774736563 +0000 UTC m=+1468.959713629" Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.823670 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.935577 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hbrjc"] Feb 16 15:17:54 crc kubenswrapper[4705]: I0216 15:17:54.936270 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerName="dnsmasq-dns" 
containerID="cri-o://b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37" gracePeriod=10 Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.682799 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.722114 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-svc\") pod \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.722204 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-nb\") pod \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.722629 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-sb\") pod \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.722708 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-swift-storage-0\") pod \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.723043 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-config\") pod \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\" (UID: 
\"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.723111 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nxgb\" (UniqueName: \"kubernetes.io/projected/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-kube-api-access-2nxgb\") pod \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\" (UID: \"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7\") " Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.762851 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-kube-api-access-2nxgb" (OuterVolumeSpecName: "kube-api-access-2nxgb") pod "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" (UID: "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7"). InnerVolumeSpecName "kube-api-access-2nxgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.790525 4705 generic.go:334] "Generic (PLEG): container finished" podID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerID="b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37" exitCode=0 Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.792019 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.792689 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" event={"ID":"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7","Type":"ContainerDied","Data":"b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37"} Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.792716 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-hbrjc" event={"ID":"2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7","Type":"ContainerDied","Data":"ede06e3254a42f9f6eec0ac56c7e1b7e4b102971ccf37608944546f6accc4101"} Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.792732 4705 scope.go:117] "RemoveContainer" containerID="b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.834827 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nxgb\" (UniqueName: \"kubernetes.io/projected/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-kube-api-access-2nxgb\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.854666 4705 scope.go:117] "RemoveContainer" containerID="ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.903927 4705 scope.go:117] "RemoveContainer" containerID="b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37" Feb 16 15:17:55 crc kubenswrapper[4705]: E0216 15:17:55.904653 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37\": container with ID starting with b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37 not found: ID does not exist" containerID="b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37" Feb 16 15:17:55 crc 
kubenswrapper[4705]: I0216 15:17:55.905106 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37"} err="failed to get container status \"b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37\": rpc error: code = NotFound desc = could not find container \"b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37\": container with ID starting with b2a2cc7507dc7703c650bce7299f187c133818a2efcb971f675fbb4b89535b37 not found: ID does not exist" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.905139 4705 scope.go:117] "RemoveContainer" containerID="ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31" Feb 16 15:17:55 crc kubenswrapper[4705]: E0216 15:17:55.906484 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31\": container with ID starting with ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31 not found: ID does not exist" containerID="ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.906514 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31"} err="failed to get container status \"ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31\": rpc error: code = NotFound desc = could not find container \"ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31\": container with ID starting with ad0deacd427c41077f43af88e51c1662e432449056a565012936b21d4d2b5f31 not found: ID does not exist" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.910606 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" (UID: "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.916199 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-config" (OuterVolumeSpecName: "config") pod "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" (UID: "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.933581 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" (UID: "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.942327 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.942365 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.942376 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.944252 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" (UID: "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:55 crc kubenswrapper[4705]: I0216 15:17:55.953938 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" (UID: "2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:56 crc kubenswrapper[4705]: I0216 15:17:56.045219 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:56 crc kubenswrapper[4705]: I0216 15:17:56.045272 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:56 crc kubenswrapper[4705]: I0216 15:17:56.203469 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hbrjc"] Feb 16 15:17:56 crc kubenswrapper[4705]: I0216 15:17:56.221532 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-hbrjc"] Feb 16 15:17:56 crc kubenswrapper[4705]: I0216 15:17:56.437373 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" path="/var/lib/kubelet/pods/2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7/volumes" Feb 16 15:17:56 crc kubenswrapper[4705]: I0216 15:17:56.806769 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerStarted","Data":"88cd6449b748bacd36f332a9b785f554dec689cf38c284b66c63db5389cadfe0"} Feb 16 15:17:57 crc kubenswrapper[4705]: I0216 15:17:57.841590 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerStarted","Data":"a859c4b9f758299d32b0ccf712f644565042666c7cc455c79b0ea695949b6fba"} Feb 16 15:17:57 crc kubenswrapper[4705]: I0216 15:17:57.841793 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:17:57 crc kubenswrapper[4705]: I0216 15:17:57.890738 
4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.394750619 podStartE2EDuration="6.890704403s" podCreationTimestamp="2026-02-16 15:17:51 +0000 UTC" firstStartedPulling="2026-02-16 15:17:52.740262791 +0000 UTC m=+1466.925239867" lastFinishedPulling="2026-02-16 15:17:57.236216565 +0000 UTC m=+1471.421193651" observedRunningTime="2026-02-16 15:17:57.873788386 +0000 UTC m=+1472.058765462" watchObservedRunningTime="2026-02-16 15:17:57.890704403 +0000 UTC m=+1472.075681479" Feb 16 15:17:58 crc kubenswrapper[4705]: I0216 15:17:58.864991 4705 generic.go:334] "Generic (PLEG): container finished" podID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerID="b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda" exitCode=0 Feb 16 15:17:58 crc kubenswrapper[4705]: I0216 15:17:58.865087 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerDied","Data":"b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda"} Feb 16 15:17:59 crc kubenswrapper[4705]: I0216 15:17:59.881316 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerStarted","Data":"cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a"} Feb 16 15:17:59 crc kubenswrapper[4705]: I0216 15:17:59.909268 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6jtvt" podStartSLOduration=3.034050216 podStartE2EDuration="10.909246878s" podCreationTimestamp="2026-02-16 15:17:49 +0000 UTC" firstStartedPulling="2026-02-16 15:17:51.439403054 +0000 UTC m=+1465.624380130" lastFinishedPulling="2026-02-16 15:17:59.314599696 +0000 UTC m=+1473.499576792" observedRunningTime="2026-02-16 15:17:59.903677701 +0000 UTC m=+1474.088654777" 
watchObservedRunningTime="2026-02-16 15:17:59.909246878 +0000 UTC m=+1474.094223954" Feb 16 15:17:59 crc kubenswrapper[4705]: I0216 15:17:59.949140 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:17:59 crc kubenswrapper[4705]: I0216 15:17:59.949685 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6jtvt" Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.006402 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6jtvt" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server" probeResult="failure" output=< Feb 16 15:18:01 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:18:01 crc kubenswrapper[4705]: > Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.671019 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.673432 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.677560 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.913646 4705 generic.go:334] "Generic (PLEG): container finished" podID="7d98759e-f50f-4b94-bd6a-8cfa1e083675" containerID="eee5c8bc6c54de4fa60aca953615e0f47f05dac72e43473a8138c9827fdeee6c" exitCode=0 Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.913762 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-v596j" event={"ID":"7d98759e-f50f-4b94-bd6a-8cfa1e083675","Type":"ContainerDied","Data":"eee5c8bc6c54de4fa60aca953615e0f47f05dac72e43473a8138c9827fdeee6c"} Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 
15:18:01.921021 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.952220 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:18:01 crc kubenswrapper[4705]: I0216 15:18:01.952309 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:18:02 crc kubenswrapper[4705]: I0216 15:18:02.967571 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.2:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:18:02 crc kubenswrapper[4705]: I0216 15:18:02.969206 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.2:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.562997 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.668532 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-config-data\") pod \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.668603 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kc89\" (UniqueName: \"kubernetes.io/projected/7d98759e-f50f-4b94-bd6a-8cfa1e083675-kube-api-access-4kc89\") pod \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.668752 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-combined-ca-bundle\") pod \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.668876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-scripts\") pod \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\" (UID: \"7d98759e-f50f-4b94-bd6a-8cfa1e083675\") " Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.692938 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-scripts" (OuterVolumeSpecName: "scripts") pod "7d98759e-f50f-4b94-bd6a-8cfa1e083675" (UID: "7d98759e-f50f-4b94-bd6a-8cfa1e083675"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.693202 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d98759e-f50f-4b94-bd6a-8cfa1e083675-kube-api-access-4kc89" (OuterVolumeSpecName: "kube-api-access-4kc89") pod "7d98759e-f50f-4b94-bd6a-8cfa1e083675" (UID: "7d98759e-f50f-4b94-bd6a-8cfa1e083675"). InnerVolumeSpecName "kube-api-access-4kc89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.719564 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d98759e-f50f-4b94-bd6a-8cfa1e083675" (UID: "7d98759e-f50f-4b94-bd6a-8cfa1e083675"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.722705 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-config-data" (OuterVolumeSpecName: "config-data") pod "7d98759e-f50f-4b94-bd6a-8cfa1e083675" (UID: "7d98759e-f50f-4b94-bd6a-8cfa1e083675"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.772089 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.772122 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.772134 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kc89\" (UniqueName: \"kubernetes.io/projected/7d98759e-f50f-4b94-bd6a-8cfa1e083675-kube-api-access-4kc89\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.772146 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d98759e-f50f-4b94-bd6a-8cfa1e083675-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.952410 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-v596j" Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.952848 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-v596j" event={"ID":"7d98759e-f50f-4b94-bd6a-8cfa1e083675","Type":"ContainerDied","Data":"e86c3ffaf3eff8a0a9a0fe2e47c66857a352df2ebc46352dbb89be5bca3ba6eb"} Feb 16 15:18:03 crc kubenswrapper[4705]: I0216 15:18:03.953076 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e86c3ffaf3eff8a0a9a0fe2e47c66857a352df2ebc46352dbb89be5bca3ba6eb" Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.170844 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.171103 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" containerName="nova-scheduler-scheduler" containerID="cri-o://48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" gracePeriod=30 Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.220070 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.220352 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-log" containerID="cri-o://3bde9a67ab57030c4271d3ea4be45bc70a1a5b80a66d369c88a326afc671bbf4" gracePeriod=30 Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.220911 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-api" containerID="cri-o://5d7a13d088492edc71fd1b2aa6743627c7da96a6dac89ebeaa29f15b0b7af5d3" gracePeriod=30 Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 
15:18:04.244782 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.967224 4705 generic.go:334] "Generic (PLEG): container finished" podID="6977ac78-db27-460b-8a38-582c65dbb67b" containerID="3bde9a67ab57030c4271d3ea4be45bc70a1a5b80a66d369c88a326afc671bbf4" exitCode=143 Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.967267 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6977ac78-db27-460b-8a38-582c65dbb67b","Type":"ContainerDied","Data":"3bde9a67ab57030c4271d3ea4be45bc70a1a5b80a66d369c88a326afc671bbf4"} Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.967847 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-log" containerID="cri-o://fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e" gracePeriod=30 Feb 16 15:18:04 crc kubenswrapper[4705]: I0216 15:18:04.967897 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-metadata" containerID="cri-o://972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de" gracePeriod=30 Feb 16 15:18:05 crc kubenswrapper[4705]: I0216 15:18:05.982777 4705 generic.go:334] "Generic (PLEG): container finished" podID="628e6201-a994-4614-9b4d-3f261b718186" containerID="fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e" exitCode=143 Feb 16 15:18:05 crc kubenswrapper[4705]: I0216 15:18:05.983261 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"628e6201-a994-4614-9b4d-3f261b718186","Type":"ContainerDied","Data":"fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e"} Feb 16 15:18:06 crc kubenswrapper[4705]: E0216 15:18:06.461056 4705 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372 is running failed: container process not found" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:18:06 crc kubenswrapper[4705]: E0216 15:18:06.461749 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372 is running failed: container process not found" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:18:06 crc kubenswrapper[4705]: E0216 15:18:06.462080 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372 is running failed: container process not found" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:18:06 crc kubenswrapper[4705]: E0216 15:18:06.462120 4705 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" containerName="nova-scheduler-scheduler" Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.779489 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.894742 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsrjk\" (UniqueName: \"kubernetes.io/projected/24dafc8c-fbe7-45cc-9558-fad23223b4d0-kube-api-access-wsrjk\") pod \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.894823 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-combined-ca-bundle\") pod \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.894901 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-config-data\") pod \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\" (UID: \"24dafc8c-fbe7-45cc-9558-fad23223b4d0\") " Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.902178 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24dafc8c-fbe7-45cc-9558-fad23223b4d0-kube-api-access-wsrjk" (OuterVolumeSpecName: "kube-api-access-wsrjk") pod "24dafc8c-fbe7-45cc-9558-fad23223b4d0" (UID: "24dafc8c-fbe7-45cc-9558-fad23223b4d0"). InnerVolumeSpecName "kube-api-access-wsrjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.940499 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24dafc8c-fbe7-45cc-9558-fad23223b4d0" (UID: "24dafc8c-fbe7-45cc-9558-fad23223b4d0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:06 crc kubenswrapper[4705]: I0216 15:18:06.942442 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-config-data" (OuterVolumeSpecName: "config-data") pod "24dafc8c-fbe7-45cc-9558-fad23223b4d0" (UID: "24dafc8c-fbe7-45cc-9558-fad23223b4d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:06.999923 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsrjk\" (UniqueName: \"kubernetes.io/projected/24dafc8c-fbe7-45cc-9558-fad23223b4d0-kube-api-access-wsrjk\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.000402 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.000423 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dafc8c-fbe7-45cc-9558-fad23223b4d0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.014004 4705 generic.go:334] "Generic (PLEG): container finished" podID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" exitCode=0 Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.014062 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"24dafc8c-fbe7-45cc-9558-fad23223b4d0","Type":"ContainerDied","Data":"48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372"} Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.014098 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"24dafc8c-fbe7-45cc-9558-fad23223b4d0","Type":"ContainerDied","Data":"3413cd4f1b552ac8085e42f4581ec09733745e00127ced13a53b75b47777a814"} Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.014118 4705 scope.go:117] "RemoveContainer" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.014345 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.076763 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.086979 4705 scope.go:117] "RemoveContainer" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" Feb 16 15:18:07 crc kubenswrapper[4705]: E0216 15:18:07.095631 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372\": container with ID starting with 48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372 not found: ID does not exist" containerID="48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.095696 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372"} err="failed to get container status \"48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372\": rpc error: code = NotFound desc = could not find container \"48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372\": container with ID starting with 48dc94d3753839a803efc41a1ef7a79ef2ec8bced643539cf04d08da58e27372 not found: ID does not exist" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.097470 4705 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.112921 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:18:07 crc kubenswrapper[4705]: E0216 15:18:07.113598 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerName="init" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.113619 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerName="init" Feb 16 15:18:07 crc kubenswrapper[4705]: E0216 15:18:07.113693 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" containerName="nova-scheduler-scheduler" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.113700 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" containerName="nova-scheduler-scheduler" Feb 16 15:18:07 crc kubenswrapper[4705]: E0216 15:18:07.113718 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d98759e-f50f-4b94-bd6a-8cfa1e083675" containerName="nova-manage" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.113725 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d98759e-f50f-4b94-bd6a-8cfa1e083675" containerName="nova-manage" Feb 16 15:18:07 crc kubenswrapper[4705]: E0216 15:18:07.113754 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerName="dnsmasq-dns" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.113760 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerName="dnsmasq-dns" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.114015 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" 
containerName="nova-scheduler-scheduler" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.114037 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca5dd9d-72f8-4c4a-b4ac-81baa7ae90d7" containerName="dnsmasq-dns" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.114053 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d98759e-f50f-4b94-bd6a-8cfa1e083675" containerName="nova-manage" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.115107 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.117423 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.145874 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.218621 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6f79\" (UniqueName: \"kubernetes.io/projected/e67e0dd7-af17-4240-ab5a-b6c149913841-kube-api-access-d6f79\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.219700 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67e0dd7-af17-4240-ab5a-b6c149913841-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.219912 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e0dd7-af17-4240-ab5a-b6c149913841-config-data\") pod 
\"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.323214 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6f79\" (UniqueName: \"kubernetes.io/projected/e67e0dd7-af17-4240-ab5a-b6c149913841-kube-api-access-d6f79\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.323420 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67e0dd7-af17-4240-ab5a-b6c149913841-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.323464 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e0dd7-af17-4240-ab5a-b6c149913841-config-data\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.330565 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67e0dd7-af17-4240-ab5a-b6c149913841-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.331561 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e0dd7-af17-4240-ab5a-b6c149913841-config-data\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.345010 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6f79\" (UniqueName: \"kubernetes.io/projected/e67e0dd7-af17-4240-ab5a-b6c149913841-kube-api-access-d6f79\") pod \"nova-scheduler-0\" (UID: \"e67e0dd7-af17-4240-ab5a-b6c149913841\") " pod="openstack/nova-scheduler-0" Feb 16 15:18:07 crc kubenswrapper[4705]: I0216 15:18:07.437791 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.060517 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.110568 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": read tcp 10.217.0.2:41636->10.217.0.254:8775: read: connection reset by peer" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.110993 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": read tcp 10.217.0.2:41622->10.217.0.254:8775: read: connection reset by peer" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.442814 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24dafc8c-fbe7-45cc-9558-fad23223b4d0" path="/var/lib/kubelet/pods/24dafc8c-fbe7-45cc-9558-fad23223b4d0/volumes" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.803859 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.884994 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2xzl\" (UniqueName: \"kubernetes.io/projected/628e6201-a994-4614-9b4d-3f261b718186-kube-api-access-t2xzl\") pod \"628e6201-a994-4614-9b4d-3f261b718186\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.886241 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-nova-metadata-tls-certs\") pod \"628e6201-a994-4614-9b4d-3f261b718186\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.886610 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-combined-ca-bundle\") pod \"628e6201-a994-4614-9b4d-3f261b718186\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.886735 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/628e6201-a994-4614-9b4d-3f261b718186-logs\") pod \"628e6201-a994-4614-9b4d-3f261b718186\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.886919 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-config-data\") pod \"628e6201-a994-4614-9b4d-3f261b718186\" (UID: \"628e6201-a994-4614-9b4d-3f261b718186\") " Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.888678 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/628e6201-a994-4614-9b4d-3f261b718186-logs" (OuterVolumeSpecName: "logs") pod "628e6201-a994-4614-9b4d-3f261b718186" (UID: "628e6201-a994-4614-9b4d-3f261b718186"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.908115 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/628e6201-a994-4614-9b4d-3f261b718186-kube-api-access-t2xzl" (OuterVolumeSpecName: "kube-api-access-t2xzl") pod "628e6201-a994-4614-9b4d-3f261b718186" (UID: "628e6201-a994-4614-9b4d-3f261b718186"). InnerVolumeSpecName "kube-api-access-t2xzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.956570 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-config-data" (OuterVolumeSpecName: "config-data") pod "628e6201-a994-4614-9b4d-3f261b718186" (UID: "628e6201-a994-4614-9b4d-3f261b718186"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.967978 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "628e6201-a994-4614-9b4d-3f261b718186" (UID: "628e6201-a994-4614-9b4d-3f261b718186"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.993441 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.993746 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/628e6201-a994-4614-9b4d-3f261b718186-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.993856 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:08 crc kubenswrapper[4705]: I0216 15:18:08.993913 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2xzl\" (UniqueName: \"kubernetes.io/projected/628e6201-a994-4614-9b4d-3f261b718186-kube-api-access-t2xzl\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.020577 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "628e6201-a994-4614-9b4d-3f261b718186" (UID: "628e6201-a994-4614-9b4d-3f261b718186"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.072051 4705 generic.go:334] "Generic (PLEG): container finished" podID="628e6201-a994-4614-9b4d-3f261b718186" containerID="972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de" exitCode=0 Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.072119 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"628e6201-a994-4614-9b4d-3f261b718186","Type":"ContainerDied","Data":"972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de"} Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.072153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"628e6201-a994-4614-9b4d-3f261b718186","Type":"ContainerDied","Data":"eeec271298f4dcb2eb43a0a1c49fcdad72fcc161271d50f3ad69a11322b20f9c"} Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.072173 4705 scope.go:117] "RemoveContainer" containerID="972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.072406 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.081761 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e67e0dd7-af17-4240-ab5a-b6c149913841","Type":"ContainerStarted","Data":"eb759fb6b2e21021de42c5ef8b41c6a6ff316da783d3073f3e00a48b6ad7b382"} Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.081813 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e67e0dd7-af17-4240-ab5a-b6c149913841","Type":"ContainerStarted","Data":"46142ee456045dcc700269dc212fada2e8ad6f9af585e9f2a3f2e6f01c476037"} Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.087580 4705 generic.go:334] "Generic (PLEG): container finished" podID="6977ac78-db27-460b-8a38-582c65dbb67b" containerID="5d7a13d088492edc71fd1b2aa6743627c7da96a6dac89ebeaa29f15b0b7af5d3" exitCode=0 Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.087629 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6977ac78-db27-460b-8a38-582c65dbb67b","Type":"ContainerDied","Data":"5d7a13d088492edc71fd1b2aa6743627c7da96a6dac89ebeaa29f15b0b7af5d3"} Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.098940 4705 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/628e6201-a994-4614-9b4d-3f261b718186-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.133592 4705 scope.go:117] "RemoveContainer" containerID="fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.139068 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.13904697 podStartE2EDuration="2.13904697s" podCreationTimestamp="2026-02-16 15:18:07 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:18:09.117326358 +0000 UTC m=+1483.302303444" watchObservedRunningTime="2026-02-16 15:18:09.13904697 +0000 UTC m=+1483.324024046" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.164711 4705 scope.go:117] "RemoveContainer" containerID="972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de" Feb 16 15:18:09 crc kubenswrapper[4705]: E0216 15:18:09.165235 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de\": container with ID starting with 972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de not found: ID does not exist" containerID="972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.165269 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de"} err="failed to get container status \"972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de\": rpc error: code = NotFound desc = could not find container \"972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de\": container with ID starting with 972ba39b29aca97a2a9baef3c137060e5be372f220647dc72eabe12e1e1400de not found: ID does not exist" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.165291 4705 scope.go:117] "RemoveContainer" containerID="fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e" Feb 16 15:18:09 crc kubenswrapper[4705]: E0216 15:18:09.165697 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e\": container with ID starting with 
fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e not found: ID does not exist" containerID="fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.165723 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e"} err="failed to get container status \"fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e\": rpc error: code = NotFound desc = could not find container \"fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e\": container with ID starting with fdbff28a8d19b8724439cece744d377071084b39d81eb6e7c5bf4c54703e4d7e not found: ID does not exist" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.198963 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.241952 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.248893 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.260019 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:18:09 crc kubenswrapper[4705]: E0216 15:18:09.260778 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-log" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.260800 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-log" Feb 16 15:18:09 crc kubenswrapper[4705]: E0216 15:18:09.260840 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-log" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.260849 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-log" Feb 16 15:18:09 crc kubenswrapper[4705]: E0216 15:18:09.260863 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-metadata" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.260869 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-metadata" Feb 16 15:18:09 crc kubenswrapper[4705]: E0216 15:18:09.260879 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-api" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.260885 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-api" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.261137 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="628e6201-a994-4614-9b4d-3f261b718186" 
containerName="nova-metadata-metadata" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.261158 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="628e6201-a994-4614-9b4d-3f261b718186" containerName="nova-metadata-log" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.261181 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-log" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.261198 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" containerName="nova-api-api" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.262804 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.264885 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.264979 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.285538 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.308302 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-config-data\") pod \"6977ac78-db27-460b-8a38-582c65dbb67b\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.308410 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-combined-ca-bundle\") pod \"6977ac78-db27-460b-8a38-582c65dbb67b\" (UID: 
\"6977ac78-db27-460b-8a38-582c65dbb67b\") " Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.308585 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6977ac78-db27-460b-8a38-582c65dbb67b-logs\") pod \"6977ac78-db27-460b-8a38-582c65dbb67b\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.308729 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8lhh\" (UniqueName: \"kubernetes.io/projected/6977ac78-db27-460b-8a38-582c65dbb67b-kube-api-access-f8lhh\") pod \"6977ac78-db27-460b-8a38-582c65dbb67b\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.308884 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-public-tls-certs\") pod \"6977ac78-db27-460b-8a38-582c65dbb67b\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.308924 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-internal-tls-certs\") pod \"6977ac78-db27-460b-8a38-582c65dbb67b\" (UID: \"6977ac78-db27-460b-8a38-582c65dbb67b\") " Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.309270 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6977ac78-db27-460b-8a38-582c65dbb67b-logs" (OuterVolumeSpecName: "logs") pod "6977ac78-db27-460b-8a38-582c65dbb67b" (UID: "6977ac78-db27-460b-8a38-582c65dbb67b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.309548 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nglzb\" (UniqueName: \"kubernetes.io/projected/e121221e-aecf-4425-bb78-e384ce98e73b-kube-api-access-nglzb\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.309672 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.309731 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-config-data\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.309929 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e121221e-aecf-4425-bb78-e384ce98e73b-logs\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.309992 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc 
kubenswrapper[4705]: I0216 15:18:09.310096 4705 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6977ac78-db27-460b-8a38-582c65dbb67b-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.314707 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6977ac78-db27-460b-8a38-582c65dbb67b-kube-api-access-f8lhh" (OuterVolumeSpecName: "kube-api-access-f8lhh") pod "6977ac78-db27-460b-8a38-582c65dbb67b" (UID: "6977ac78-db27-460b-8a38-582c65dbb67b"). InnerVolumeSpecName "kube-api-access-f8lhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.355400 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-config-data" (OuterVolumeSpecName: "config-data") pod "6977ac78-db27-460b-8a38-582c65dbb67b" (UID: "6977ac78-db27-460b-8a38-582c65dbb67b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.359959 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6977ac78-db27-460b-8a38-582c65dbb67b" (UID: "6977ac78-db27-460b-8a38-582c65dbb67b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.413500 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6977ac78-db27-460b-8a38-582c65dbb67b" (UID: "6977ac78-db27-460b-8a38-582c65dbb67b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.414629 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e121221e-aecf-4425-bb78-e384ce98e73b-logs\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.414706 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.415308 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nglzb\" (UniqueName: \"kubernetes.io/projected/e121221e-aecf-4425-bb78-e384ce98e73b-kube-api-access-nglzb\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.416260 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e121221e-aecf-4425-bb78-e384ce98e73b-logs\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.416877 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.417039 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-config-data\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.419156 4705 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.419516 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.419611 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.419679 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8lhh\" (UniqueName: \"kubernetes.io/projected/6977ac78-db27-460b-8a38-582c65dbb67b-kube-api-access-f8lhh\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.422262 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.425025 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " 
pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.429041 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e121221e-aecf-4425-bb78-e384ce98e73b-config-data\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.435041 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nglzb\" (UniqueName: \"kubernetes.io/projected/e121221e-aecf-4425-bb78-e384ce98e73b-kube-api-access-nglzb\") pod \"nova-metadata-0\" (UID: \"e121221e-aecf-4425-bb78-e384ce98e73b\") " pod="openstack/nova-metadata-0" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.449417 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6977ac78-db27-460b-8a38-582c65dbb67b" (UID: "6977ac78-db27-460b-8a38-582c65dbb67b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.522785 4705 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6977ac78-db27-460b-8a38-582c65dbb67b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:09 crc kubenswrapper[4705]: I0216 15:18:09.600880 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:18:10 crc kubenswrapper[4705]: W0216 15:18:10.092298 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode121221e_aecf_4425_bb78_e384ce98e73b.slice/crio-1d40aca17cbcad8fed7eac1369a77a69a9d791294a104f41f271b5d73e6ed988 WatchSource:0}: Error finding container 1d40aca17cbcad8fed7eac1369a77a69a9d791294a104f41f271b5d73e6ed988: Status 404 returned error can't find the container with id 1d40aca17cbcad8fed7eac1369a77a69a9d791294a104f41f271b5d73e6ed988 Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.094174 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.108246 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6977ac78-db27-460b-8a38-582c65dbb67b","Type":"ContainerDied","Data":"634fe2e8d4cd6243784fcfff154297107ca02631b3e1e85607f19102432197a6"} Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.108298 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.108339 4705 scope.go:117] "RemoveContainer" containerID="5d7a13d088492edc71fd1b2aa6743627c7da96a6dac89ebeaa29f15b0b7af5d3" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.148580 4705 scope.go:117] "RemoveContainer" containerID="3bde9a67ab57030c4271d3ea4be45bc70a1a5b80a66d369c88a326afc671bbf4" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.154519 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.174130 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.193800 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.196547 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.209190 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.209310 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.209389 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.220965 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.249790 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.249849 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.249922 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.249971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-config-data\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.250018 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f98b0f-bb45-4942-81e0-68e6f2658df5-logs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.250083 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc4zk\" (UniqueName: \"kubernetes.io/projected/b3f98b0f-bb45-4942-81e0-68e6f2658df5-kube-api-access-dc4zk\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.352534 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.352786 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.352874 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.352922 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-config-data\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.352980 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f98b0f-bb45-4942-81e0-68e6f2658df5-logs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.353040 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc4zk\" (UniqueName: \"kubernetes.io/projected/b3f98b0f-bb45-4942-81e0-68e6f2658df5-kube-api-access-dc4zk\") pod \"nova-api-0\" (UID: 
\"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.358152 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.358913 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.359072 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-config-data\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.361032 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f98b0f-bb45-4942-81e0-68e6f2658df5-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.361843 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3f98b0f-bb45-4942-81e0-68e6f2658df5-logs\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.380157 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc4zk\" (UniqueName: 
\"kubernetes.io/projected/b3f98b0f-bb45-4942-81e0-68e6f2658df5-kube-api-access-dc4zk\") pod \"nova-api-0\" (UID: \"b3f98b0f-bb45-4942-81e0-68e6f2658df5\") " pod="openstack/nova-api-0" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.446143 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="628e6201-a994-4614-9b4d-3f261b718186" path="/var/lib/kubelet/pods/628e6201-a994-4614-9b4d-3f261b718186/volumes" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.448884 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6977ac78-db27-460b-8a38-582c65dbb67b" path="/var/lib/kubelet/pods/6977ac78-db27-460b-8a38-582c65dbb67b/volumes" Feb 16 15:18:10 crc kubenswrapper[4705]: I0216 15:18:10.541544 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:18:11 crc kubenswrapper[4705]: I0216 15:18:11.013791 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6jtvt" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server" probeResult="failure" output=< Feb 16 15:18:11 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:18:11 crc kubenswrapper[4705]: > Feb 16 15:18:11 crc kubenswrapper[4705]: W0216 15:18:11.122457 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3f98b0f_bb45_4942_81e0_68e6f2658df5.slice/crio-b55af5f4a6ddf7788f060d1a98e7c5c9dbbfc2ec1466052074b24546e2da6f8d WatchSource:0}: Error finding container b55af5f4a6ddf7788f060d1a98e7c5c9dbbfc2ec1466052074b24546e2da6f8d: Status 404 returned error can't find the container with id b55af5f4a6ddf7788f060d1a98e7c5c9dbbfc2ec1466052074b24546e2da6f8d Feb 16 15:18:11 crc kubenswrapper[4705]: I0216 15:18:11.124679 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e121221e-aecf-4425-bb78-e384ce98e73b","Type":"ContainerStarted","Data":"80b053d99a4a239647c917dadc86268c20bab7e4733d84f70801e778283d19ee"} Feb 16 15:18:11 crc kubenswrapper[4705]: I0216 15:18:11.124776 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e121221e-aecf-4425-bb78-e384ce98e73b","Type":"ContainerStarted","Data":"605515fa5a1ce023877b35e7aca63570cea6d73ee46bdb734ebfe10778815ff4"} Feb 16 15:18:11 crc kubenswrapper[4705]: I0216 15:18:11.124804 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:18:11 crc kubenswrapper[4705]: I0216 15:18:11.125164 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e121221e-aecf-4425-bb78-e384ce98e73b","Type":"ContainerStarted","Data":"1d40aca17cbcad8fed7eac1369a77a69a9d791294a104f41f271b5d73e6ed988"} Feb 16 15:18:11 crc kubenswrapper[4705]: I0216 15:18:11.151831 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.151814763 podStartE2EDuration="2.151814763s" podCreationTimestamp="2026-02-16 15:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:18:11.14106412 +0000 UTC m=+1485.326041196" watchObservedRunningTime="2026-02-16 15:18:11.151814763 +0000 UTC m=+1485.336791839" Feb 16 15:18:12 crc kubenswrapper[4705]: I0216 15:18:12.147286 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3f98b0f-bb45-4942-81e0-68e6f2658df5","Type":"ContainerStarted","Data":"175f9fe5c00efdc0e273ab22128eec8a1538b8d92d019a733175abba7df05320"} Feb 16 15:18:12 crc kubenswrapper[4705]: I0216 15:18:12.148103 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b3f98b0f-bb45-4942-81e0-68e6f2658df5","Type":"ContainerStarted","Data":"e42615b7c1c1e4d0238110dcaeb523081af56ef70f114d18a4a80c8f964f6b6b"} Feb 16 15:18:12 crc kubenswrapper[4705]: I0216 15:18:12.148125 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3f98b0f-bb45-4942-81e0-68e6f2658df5","Type":"ContainerStarted","Data":"b55af5f4a6ddf7788f060d1a98e7c5c9dbbfc2ec1466052074b24546e2da6f8d"} Feb 16 15:18:12 crc kubenswrapper[4705]: I0216 15:18:12.174245 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.174222094 podStartE2EDuration="2.174222094s" podCreationTimestamp="2026-02-16 15:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:18:12.167492954 +0000 UTC m=+1486.352470030" watchObservedRunningTime="2026-02-16 15:18:12.174222094 +0000 UTC m=+1486.359199170" Feb 16 15:18:12 crc kubenswrapper[4705]: I0216 15:18:12.438547 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 15:18:14 crc kubenswrapper[4705]: I0216 15:18:14.601232 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:18:14 crc kubenswrapper[4705]: I0216 15:18:14.602064 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:18:17 crc kubenswrapper[4705]: I0216 15:18:17.437975 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 15:18:17 crc kubenswrapper[4705]: I0216 15:18:17.488719 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 15:18:18 crc kubenswrapper[4705]: I0216 15:18:18.267663 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-scheduler-0" Feb 16 15:18:19 crc kubenswrapper[4705]: I0216 15:18:19.601535 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:18:19 crc kubenswrapper[4705]: I0216 15:18:19.603243 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:18:20 crc kubenswrapper[4705]: I0216 15:18:20.542135 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:18:20 crc kubenswrapper[4705]: I0216 15:18:20.542599 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:18:20 crc kubenswrapper[4705]: I0216 15:18:20.621521 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e121221e-aecf-4425-bb78-e384ce98e73b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:18:20 crc kubenswrapper[4705]: I0216 15:18:20.621559 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e121221e-aecf-4425-bb78-e384ce98e73b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:18:21 crc kubenswrapper[4705]: I0216 15:18:21.016248 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6jtvt" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server" probeResult="failure" output=< Feb 16 15:18:21 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:18:21 crc kubenswrapper[4705]: > Feb 16 15:18:21 crc kubenswrapper[4705]: I0216 15:18:21.554568 4705 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="b3f98b0f-bb45-4942-81e0-68e6f2658df5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:18:21 crc kubenswrapper[4705]: I0216 15:18:21.554622 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b3f98b0f-bb45-4942-81e0-68e6f2658df5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:18:22 crc kubenswrapper[4705]: I0216 15:18:22.015800 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.244567 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.245524 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" containerName="kube-state-metrics" containerID="cri-o://24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95" gracePeriod=30 Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.320079 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.320359 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="683ef288-8b6e-4612-be52-d1654bd75098" containerName="mysqld-exporter" containerID="cri-o://2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8" gracePeriod=30 Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.880792 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.990966 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 15:18:26 crc kubenswrapper[4705]: I0216 15:18:26.991913 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdfvl\" (UniqueName: \"kubernetes.io/projected/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0-kube-api-access-mdfvl\") pod \"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0\" (UID: \"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0\") " Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.006673 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0-kube-api-access-mdfvl" (OuterVolumeSpecName: "kube-api-access-mdfvl") pod "bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" (UID: "bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0"). InnerVolumeSpecName "kube-api-access-mdfvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.094532 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-combined-ca-bundle\") pod \"683ef288-8b6e-4612-be52-d1654bd75098\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.094918 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-config-data\") pod \"683ef288-8b6e-4612-be52-d1654bd75098\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.095024 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxz7l\" (UniqueName: \"kubernetes.io/projected/683ef288-8b6e-4612-be52-d1654bd75098-kube-api-access-bxz7l\") pod \"683ef288-8b6e-4612-be52-d1654bd75098\" (UID: \"683ef288-8b6e-4612-be52-d1654bd75098\") " Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.095987 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdfvl\" (UniqueName: \"kubernetes.io/projected/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0-kube-api-access-mdfvl\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.099295 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683ef288-8b6e-4612-be52-d1654bd75098-kube-api-access-bxz7l" (OuterVolumeSpecName: "kube-api-access-bxz7l") pod "683ef288-8b6e-4612-be52-d1654bd75098" (UID: "683ef288-8b6e-4612-be52-d1654bd75098"). InnerVolumeSpecName "kube-api-access-bxz7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.131675 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "683ef288-8b6e-4612-be52-d1654bd75098" (UID: "683ef288-8b6e-4612-be52-d1654bd75098"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.161942 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-config-data" (OuterVolumeSpecName: "config-data") pod "683ef288-8b6e-4612-be52-d1654bd75098" (UID: "683ef288-8b6e-4612-be52-d1654bd75098"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.198512 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxz7l\" (UniqueName: \"kubernetes.io/projected/683ef288-8b6e-4612-be52-d1654bd75098-kube-api-access-bxz7l\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.198561 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.198571 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683ef288-8b6e-4612-be52-d1654bd75098-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.359675 4705 generic.go:334] "Generic (PLEG): container finished" podID="683ef288-8b6e-4612-be52-d1654bd75098" containerID="2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8" 
exitCode=2 Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.359741 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"683ef288-8b6e-4612-be52-d1654bd75098","Type":"ContainerDied","Data":"2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8"} Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.359776 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"683ef288-8b6e-4612-be52-d1654bd75098","Type":"ContainerDied","Data":"3c16a853ff0683de7e65e4c7c2c283c0bc34b6c75fda5fb9261d347018293d69"} Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.359797 4705 scope.go:117] "RemoveContainer" containerID="2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.359934 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.369094 4705 generic.go:334] "Generic (PLEG): container finished" podID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" containerID="24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95" exitCode=2 Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.369148 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0","Type":"ContainerDied","Data":"24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95"} Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.369177 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.369202 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0","Type":"ContainerDied","Data":"75cb532fcced0ca2257b46e26b2cad547a6e03dd08f6c3f879a11562ab1a0955"} Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.431433 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.433602 4705 scope.go:117] "RemoveContainer" containerID="2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8" Feb 16 15:18:27 crc kubenswrapper[4705]: E0216 15:18:27.433990 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8\": container with ID starting with 2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8 not found: ID does not exist" containerID="2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.434032 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8"} err="failed to get container status \"2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8\": rpc error: code = NotFound desc = could not find container \"2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8\": container with ID starting with 2de495d86f0947a4bbcd49274f85a097907d3f03f74448653262353ca8a0b1d8 not found: ID does not exist" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.434059 4705 scope.go:117] "RemoveContainer" containerID="24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.449347 
4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.467265 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 15:18:27 crc kubenswrapper[4705]: E0216 15:18:27.474797 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683ef288-8b6e-4612-be52-d1654bd75098" containerName="mysqld-exporter" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.474835 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="683ef288-8b6e-4612-be52-d1654bd75098" containerName="mysqld-exporter" Feb 16 15:18:27 crc kubenswrapper[4705]: E0216 15:18:27.474884 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" containerName="kube-state-metrics" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.474893 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" containerName="kube-state-metrics" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.475529 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" containerName="kube-state-metrics" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.475570 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="683ef288-8b6e-4612-be52-d1654bd75098" containerName="mysqld-exporter" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.477485 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.484209 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.486805 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.487023 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.501267 4705 scope.go:117] "RemoveContainer" containerID="24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.507211 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-config-data\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.507330 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.507406 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.507982 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcdps\" (UniqueName: \"kubernetes.io/projected/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-kube-api-access-kcdps\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: E0216 15:18:27.511107 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95\": container with ID starting with 24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95 not found: ID does not exist" containerID="24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.511140 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95"} err="failed to get container status \"24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95\": rpc error: code = NotFound desc = could not find container \"24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95\": container with ID starting with 24619c0a01c14d772beb952f170c2c7e3fd879f8952017573346d3c022859e95 not found: ID does not exist" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.515926 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.540203 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.562184 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.566258 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.570068 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.570539 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.596182 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612501 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69fp6\" (UniqueName: \"kubernetes.io/projected/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-api-access-69fp6\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612601 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-config-data\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612639 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612726 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612754 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612819 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612844 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcdps\" (UniqueName: \"kubernetes.io/projected/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-kube-api-access-kcdps\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.612898 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.620574 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.639458 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-config-data\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.639624 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.644264 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcdps\" (UniqueName: \"kubernetes.io/projected/d40e4f3a-57bb-45e6-997b-39ffc0e497d9-kube-api-access-kcdps\") pod \"mysqld-exporter-0\" (UID: \"d40e4f3a-57bb-45e6-997b-39ffc0e497d9\") " pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.715199 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69fp6\" (UniqueName: \"kubernetes.io/projected/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-api-access-69fp6\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.715285 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: 
\"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.715406 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.715454 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.719522 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.719752 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.721457 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " 
pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.732322 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69fp6\" (UniqueName: \"kubernetes.io/projected/db5e423c-e590-4e7b-913a-a0a10d55537d-kube-api-access-69fp6\") pod \"kube-state-metrics-0\" (UID: \"db5e423c-e590-4e7b-913a-a0a10d55537d\") " pod="openstack/kube-state-metrics-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.811010 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 16 15:18:27 crc kubenswrapper[4705]: I0216 15:18:27.886499 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:18:28 crc kubenswrapper[4705]: I0216 15:18:28.435416 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="683ef288-8b6e-4612-be52-d1654bd75098" path="/var/lib/kubelet/pods/683ef288-8b6e-4612-be52-d1654bd75098/volumes" Feb 16 15:18:28 crc kubenswrapper[4705]: I0216 15:18:28.436470 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0" path="/var/lib/kubelet/pods/bc2fcf9e-1bc7-4b0c-aa83-b4d5daafbcf0/volumes" Feb 16 15:18:28 crc kubenswrapper[4705]: W0216 15:18:28.586908 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd40e4f3a_57bb_45e6_997b_39ffc0e497d9.slice/crio-96634e34e34dd4afb1e3ed4a3b26c076a90146ae17bfd2de53b239c80152a26f WatchSource:0}: Error finding container 96634e34e34dd4afb1e3ed4a3b26c076a90146ae17bfd2de53b239c80152a26f: Status 404 returned error can't find the container with id 96634e34e34dd4afb1e3ed4a3b26c076a90146ae17bfd2de53b239c80152a26f Feb 16 15:18:28 crc kubenswrapper[4705]: I0216 15:18:28.589729 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 16 15:18:28 crc kubenswrapper[4705]: 
I0216 15:18:28.710275 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.192505 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.193162 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-central-agent" containerID="cri-o://4072ef38a4c487ee391e17074b51b5326fb665d3e3b590d852c735f83bad4281" gracePeriod=30 Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.194065 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="proxy-httpd" containerID="cri-o://a859c4b9f758299d32b0ccf712f644565042666c7cc455c79b0ea695949b6fba" gracePeriod=30 Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.194234 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="sg-core" containerID="cri-o://88cd6449b748bacd36f332a9b785f554dec689cf38c284b66c63db5389cadfe0" gracePeriod=30 Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.194356 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-notification-agent" containerID="cri-o://d3719f70dd43cd660f597910c5ac6ae7a802a77b579e0b9486b99cd05fa097dc" gracePeriod=30 Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.407068 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"db5e423c-e590-4e7b-913a-a0a10d55537d","Type":"ContainerStarted","Data":"f62cd2996483c851bc4686ba09550c79108adba85fa2dd0a75b2ef05f42146f5"} Feb 16 15:18:29 crc 
kubenswrapper[4705]: I0216 15:18:29.412908 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"d40e4f3a-57bb-45e6-997b-39ffc0e497d9","Type":"ContainerStarted","Data":"96634e34e34dd4afb1e3ed4a3b26c076a90146ae17bfd2de53b239c80152a26f"} Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.416531 4705 generic.go:334] "Generic (PLEG): container finished" podID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerID="a859c4b9f758299d32b0ccf712f644565042666c7cc455c79b0ea695949b6fba" exitCode=0 Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.416575 4705 generic.go:334] "Generic (PLEG): container finished" podID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerID="88cd6449b748bacd36f332a9b785f554dec689cf38c284b66c63db5389cadfe0" exitCode=2 Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.416597 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerDied","Data":"a859c4b9f758299d32b0ccf712f644565042666c7cc455c79b0ea695949b6fba"} Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.416614 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerDied","Data":"88cd6449b748bacd36f332a9b785f554dec689cf38c284b66c63db5389cadfe0"} Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.606698 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.611109 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 15:18:29 crc kubenswrapper[4705]: I0216 15:18:29.618359 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.017095 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.074368 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.258147 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6jtvt"]
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.434551 4705 generic.go:334] "Generic (PLEG): container finished" podID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerID="4072ef38a4c487ee391e17074b51b5326fb665d3e3b590d852c735f83bad4281" exitCode=0
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.435223 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"d40e4f3a-57bb-45e6-997b-39ffc0e497d9","Type":"ContainerStarted","Data":"8fc5963eedb43a94bdfaff01f3a3d86e1c39c0b2e61c081e17a443fb532d6277"}
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.435268 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerDied","Data":"4072ef38a4c487ee391e17074b51b5326fb665d3e3b590d852c735f83bad4281"}
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.437184 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"db5e423c-e590-4e7b-913a-a0a10d55537d","Type":"ContainerStarted","Data":"150b3e6bd321ebd2a450168fa1d037631c9949d3b48fc77d7d9938a205d6fdaa"}
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.444661 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.450960 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.927973685 podStartE2EDuration="3.45093896s" podCreationTimestamp="2026-02-16 15:18:27 +0000 UTC" firstStartedPulling="2026-02-16 15:18:28.590623933 +0000 UTC m=+1502.775600999" lastFinishedPulling="2026-02-16 15:18:29.113589198 +0000 UTC m=+1503.298566274" observedRunningTime="2026-02-16 15:18:30.448591533 +0000 UTC m=+1504.633568619" watchObservedRunningTime="2026-02-16 15:18:30.45093896 +0000 UTC m=+1504.635916036"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.540431 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.074858676 podStartE2EDuration="3.540403794s" podCreationTimestamp="2026-02-16 15:18:27 +0000 UTC" firstStartedPulling="2026-02-16 15:18:28.712629979 +0000 UTC m=+1502.897607055" lastFinishedPulling="2026-02-16 15:18:29.178175097 +0000 UTC m=+1503.363152173" observedRunningTime="2026-02-16 15:18:30.523795794 +0000 UTC m=+1504.708772870" watchObservedRunningTime="2026-02-16 15:18:30.540403794 +0000 UTC m=+1504.725380880"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.553315 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.554955 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.562220 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 15:18:30 crc kubenswrapper[4705]: I0216 15:18:30.568498 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 15:18:31 crc kubenswrapper[4705]: I0216 15:18:31.448560 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 15:18:31 crc kubenswrapper[4705]: I0216 15:18:31.448631 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 16 15:18:31 crc kubenswrapper[4705]: I0216 15:18:31.449090 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6jtvt" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server" containerID="cri-o://cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a" gracePeriod=2
Feb 16 15:18:31 crc kubenswrapper[4705]: I0216 15:18:31.457657 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 15:18:31 crc kubenswrapper[4705]: I0216 15:18:31.683869 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:18:31 crc kubenswrapper[4705]: I0216 15:18:31.684260 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.064047 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.214611 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-catalog-content\") pod \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") "
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.214958 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-utilities\") pod \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") "
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.215156 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmf2t\" (UniqueName: \"kubernetes.io/projected/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-kube-api-access-fmf2t\") pod \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\" (UID: \"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3\") "
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.216440 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-utilities" (OuterVolumeSpecName: "utilities") pod "ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" (UID: "ccdf8a61-b523-496c-bf8d-4b8a12aba9d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.222845 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-kube-api-access-fmf2t" (OuterVolumeSpecName: "kube-api-access-fmf2t") pod "ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" (UID: "ccdf8a61-b523-496c-bf8d-4b8a12aba9d3"). InnerVolumeSpecName "kube-api-access-fmf2t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.319328 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmf2t\" (UniqueName: \"kubernetes.io/projected/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-kube-api-access-fmf2t\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.319377 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.350082 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" (UID: "ccdf8a61-b523-496c-bf8d-4b8a12aba9d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.421573 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.465344 4705 generic.go:334] "Generic (PLEG): container finished" podID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerID="cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a" exitCode=0
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.465465 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jtvt"
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.465477 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerDied","Data":"cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a"}
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.465550 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jtvt" event={"ID":"ccdf8a61-b523-496c-bf8d-4b8a12aba9d3","Type":"ContainerDied","Data":"e1adb33222027cc4f090326df3b9dd77bb0143da9f839682a1a04a68a2f7c1af"}
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.465576 4705 scope.go:117] "RemoveContainer" containerID="cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a"
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.499814 4705 scope.go:117] "RemoveContainer" containerID="b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda"
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.508806 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6jtvt"]
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.522821 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6jtvt"]
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.524635 4705 scope.go:117] "RemoveContainer" containerID="c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4"
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.591645 4705 scope.go:117] "RemoveContainer" containerID="cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a"
Feb 16 15:18:32 crc kubenswrapper[4705]: E0216 15:18:32.592398 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a\": container with ID starting with cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a not found: ID does not exist" containerID="cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a"
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.592448 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a"} err="failed to get container status \"cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a\": rpc error: code = NotFound desc = could not find container \"cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a\": container with ID starting with cf679f08e3de0d7a3c610788d8fdaf278e091bd1252acc77d2f9876243cf8b0a not found: ID does not exist"
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.592478 4705 scope.go:117] "RemoveContainer" containerID="b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda"
Feb 16 15:18:32 crc kubenswrapper[4705]: E0216 15:18:32.592877 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda\": container with ID starting with b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda not found: ID does not exist" containerID="b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda"
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.592938 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda"} err="failed to get container status \"b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda\": rpc error: code = NotFound desc = could not find container \"b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda\": container with ID starting with b33911ea6e5806f2e96634fe89a6b39ac4e67895ff5495945f3efda0b08bafda not found: ID does not exist"
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.592987 4705 scope.go:117] "RemoveContainer" containerID="c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4"
Feb 16 15:18:32 crc kubenswrapper[4705]: E0216 15:18:32.593363 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4\": container with ID starting with c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4 not found: ID does not exist" containerID="c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4"
Feb 16 15:18:32 crc kubenswrapper[4705]: I0216 15:18:32.593406 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4"} err="failed to get container status \"c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4\": rpc error: code = NotFound desc = could not find container \"c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4\": container with ID starting with c6c035c3db55781dd2ee976b08d5e81ee9c9027623bbd7ded3af32c9652f43e4 not found: ID does not exist"
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.500681 4705 generic.go:334] "Generic (PLEG): container finished" podID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerID="d3719f70dd43cd660f597910c5ac6ae7a802a77b579e0b9486b99cd05fa097dc" exitCode=0
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.501224 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerDied","Data":"d3719f70dd43cd660f597910c5ac6ae7a802a77b579e0b9486b99cd05fa097dc"}
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.720757 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.868949 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-config-data\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") "
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.869009 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-sg-core-conf-yaml\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") "
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.869069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-run-httpd\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") "
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.869137 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-combined-ca-bundle\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") "
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.869232 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb6vt\" (UniqueName: \"kubernetes.io/projected/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-kube-api-access-lb6vt\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") "
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.869330 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-log-httpd\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") "
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.869426 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-scripts\") pod \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\" (UID: \"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc\") "
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.870489 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.870803 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.876968 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-scripts" (OuterVolumeSpecName: "scripts") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.899188 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-kube-api-access-lb6vt" (OuterVolumeSpecName: "kube-api-access-lb6vt") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "kube-api-access-lb6vt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.922627 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.974158 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.975420 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.975586 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb6vt\" (UniqueName: \"kubernetes.io/projected/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-kube-api-access-lb6vt\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.976221 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.976517 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:33 crc kubenswrapper[4705]: I0216 15:18:33.981014 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.040451 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-config-data" (OuterVolumeSpecName: "config-data") pod "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" (UID: "df7be89b-cd9f-45f0-b3e9-5f50def9cfcc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.079683 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.079716 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.463773 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" path="/var/lib/kubelet/pods/ccdf8a61-b523-496c-bf8d-4b8a12aba9d3/volumes"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.531749 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df7be89b-cd9f-45f0-b3e9-5f50def9cfcc","Type":"ContainerDied","Data":"aeb06779efac7c38585b17cfd3ae6968f2916d9ee186859b6bf4a5e6711bb96e"}
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.531838 4705 scope.go:117] "RemoveContainer" containerID="a859c4b9f758299d32b0ccf712f644565042666c7cc455c79b0ea695949b6fba"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.532071 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.577592 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.599279 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.622691 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623321 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="extract-content"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623339 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="extract-content"
Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623352 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-central-agent"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623365 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-central-agent"
Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623421 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="sg-core"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623438 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="sg-core"
Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623483 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="proxy-httpd"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623492 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="proxy-httpd"
Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623512 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="extract-utilities"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623521 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="extract-utilities"
Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623540 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623549 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server"
Feb 16 15:18:34 crc kubenswrapper[4705]: E0216 15:18:34.623575 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-notification-agent"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623583 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-notification-agent"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623870 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="proxy-httpd"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623890 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="sg-core"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623906 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-notification-agent"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623922 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" containerName="ceilometer-central-agent"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.623939 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccdf8a61-b523-496c-bf8d-4b8a12aba9d3" containerName="registry-server"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.627402 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.632031 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.632711 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.632891 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.665473 4705 scope.go:117] "RemoveContainer" containerID="88cd6449b748bacd36f332a9b785f554dec689cf38c284b66c63db5389cadfe0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.684450 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.693447 4705 scope.go:117] "RemoveContainer" containerID="d3719f70dd43cd660f597910c5ac6ae7a802a77b579e0b9486b99cd05fa097dc"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.700160 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.700417 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-scripts\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.700649 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-config-data\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.700818 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-log-httpd\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.700921 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-run-httpd\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.701022 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp98g\" (UniqueName: \"kubernetes.io/projected/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-kube-api-access-zp98g\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.701153 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.701318 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.725965 4705 scope.go:117] "RemoveContainer" containerID="4072ef38a4c487ee391e17074b51b5326fb665d3e3b590d852c735f83bad4281"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804479 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804600 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-scripts\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804698 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-config-data\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804778 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-log-httpd\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804808 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-run-httpd\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804843 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp98g\" (UniqueName: \"kubernetes.io/projected/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-kube-api-access-zp98g\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804898 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.804968 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.806752 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-run-httpd\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.806981 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-log-httpd\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.811821 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.813270 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.814673 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-scripts\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.816238 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-config-data\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.819168 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.833752 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp98g\" (UniqueName: \"kubernetes.io/projected/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-kube-api-access-zp98g\") pod \"ceilometer-0\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " pod="openstack/ceilometer-0"
Feb 16 15:18:34 crc kubenswrapper[4705]: I0216 15:18:34.955253 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:18:35 crc kubenswrapper[4705]: I0216 15:18:35.511930 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:18:35 crc kubenswrapper[4705]: I0216 15:18:35.530273 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 15:18:35 crc kubenswrapper[4705]: I0216 15:18:35.553182 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerStarted","Data":"b14abde630b2bee0d5eb1b3685bd917b7fcb2ae39f9d9939adcb84271012d464"}
Feb 16 15:18:36 crc kubenswrapper[4705]: I0216 15:18:36.445017 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df7be89b-cd9f-45f0-b3e9-5f50def9cfcc" path="/var/lib/kubelet/pods/df7be89b-cd9f-45f0-b3e9-5f50def9cfcc/volumes"
Feb 16 15:18:36 crc kubenswrapper[4705]: I0216 15:18:36.580328 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerStarted","Data":"a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53"}
Feb 16 15:18:37 crc kubenswrapper[4705]: I0216 15:18:37.620655 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerStarted","Data":"91aeb8594453ff9bcd51e5fe0ab599a752c34f01daa891375ee555aec1791e45"}
Feb 16 15:18:37 crc kubenswrapper[4705]: I0216 15:18:37.895709 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 16 15:18:39 crc kubenswrapper[4705]: I0216 15:18:39.655782 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerStarted","Data":"7519732cba0c1f76a21cb14cdae25f64ef4456a2f380bc80aa084048307f5fc9"}
Feb 16 15:18:40 crc kubenswrapper[4705]: I0216 15:18:40.676057 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerStarted","Data":"641efa60bfe91a134040872171cd5dc36af1a54a2fde0519e82073f0282da31f"}
Feb 16 15:18:40 crc kubenswrapper[4705]: I0216 15:18:40.676485 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 15:18:40 crc kubenswrapper[4705]: I0216 15:18:40.710572 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.23204635 podStartE2EDuration="6.710491782s" podCreationTimestamp="2026-02-16 15:18:34 +0000 UTC" firstStartedPulling="2026-02-16 15:18:35.529710936 +0000 UTC m=+1509.714688052" lastFinishedPulling="2026-02-16 15:18:40.008156398 +0000 UTC m=+1514.193133484" observedRunningTime="2026-02-16 15:18:40.699079968 +0000 UTC m=+1514.884057044" watchObservedRunningTime="2026-02-16 15:18:40.710491782 +0000 UTC m=+1514.895468858"
Feb 16 15:19:01 crc kubenswrapper[4705]: I0216 15:19:01.683854 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:19:01 crc kubenswrapper[4705]: I0216 15:19:01.684493 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:19:04 crc kubenswrapper[4705]: I0216 15:19:04.969123 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.668934 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-nz52p"] Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.775674 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-nz52p"] Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.831767 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-d9lbf"] Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.837826 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.886217 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-d9lbf"] Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.960573 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e6dd23-2e83-460f-b42f-885bf7af0214-config-data\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.960630 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6dd23-2e83-460f-b42f-885bf7af0214-combined-ca-bundle\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:16 crc kubenswrapper[4705]: I0216 15:19:16.960745 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdl5d\" (UniqueName: \"kubernetes.io/projected/09e6dd23-2e83-460f-b42f-885bf7af0214-kube-api-access-tdl5d\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.063899 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e6dd23-2e83-460f-b42f-885bf7af0214-config-data\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.066015 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/09e6dd23-2e83-460f-b42f-885bf7af0214-combined-ca-bundle\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.066336 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdl5d\" (UniqueName: \"kubernetes.io/projected/09e6dd23-2e83-460f-b42f-885bf7af0214-kube-api-access-tdl5d\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.071308 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e6dd23-2e83-460f-b42f-885bf7af0214-combined-ca-bundle\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.072400 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e6dd23-2e83-460f-b42f-885bf7af0214-config-data\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.083969 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdl5d\" (UniqueName: \"kubernetes.io/projected/09e6dd23-2e83-460f-b42f-885bf7af0214-kube-api-access-tdl5d\") pod \"heat-db-sync-d9lbf\" (UID: \"09e6dd23-2e83-460f-b42f-885bf7af0214\") " pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.182269 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-d9lbf" Feb 16 15:19:17 crc kubenswrapper[4705]: I0216 15:19:17.794070 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-d9lbf"] Feb 16 15:19:17 crc kubenswrapper[4705]: E0216 15:19:17.976264 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:19:17 crc kubenswrapper[4705]: E0216 15:19:17.976696 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:19:17 crc kubenswrapper[4705]: E0216 15:19:17.976882 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:19:17 crc kubenswrapper[4705]: E0216 15:19:17.978123 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:19:18 crc kubenswrapper[4705]: I0216 15:19:18.200183 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-d9lbf" event={"ID":"09e6dd23-2e83-460f-b42f-885bf7af0214","Type":"ContainerStarted","Data":"418278f1cc47aacacb7fcac2908486e492493310ac4701393b7de2a51d8dc824"} Feb 16 15:19:18 crc kubenswrapper[4705]: E0216 15:19:18.202902 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:19:18 crc kubenswrapper[4705]: I0216 15:19:18.224614 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:19:18 crc kubenswrapper[4705]: I0216 15:19:18.433818 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72538f80-8a9f-451f-9653-4f1faeec593c" path="/var/lib/kubelet/pods/72538f80-8a9f-451f-9653-4f1faeec593c/volumes" Feb 16 15:19:19 crc kubenswrapper[4705]: E0216 15:19:19.214924 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:19:19 crc kubenswrapper[4705]: I0216 15:19:19.290962 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:19:19 crc kubenswrapper[4705]: I0216 15:19:19.423491 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:19:19 crc kubenswrapper[4705]: I0216 15:19:19.424031 4705 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="sg-core" containerID="cri-o://7519732cba0c1f76a21cb14cdae25f64ef4456a2f380bc80aa084048307f5fc9" gracePeriod=30 Feb 16 15:19:19 crc kubenswrapper[4705]: I0216 15:19:19.424064 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-notification-agent" containerID="cri-o://91aeb8594453ff9bcd51e5fe0ab599a752c34f01daa891375ee555aec1791e45" gracePeriod=30 Feb 16 15:19:19 crc kubenswrapper[4705]: I0216 15:19:19.424106 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="proxy-httpd" containerID="cri-o://641efa60bfe91a134040872171cd5dc36af1a54a2fde0519e82073f0282da31f" gracePeriod=30 Feb 16 15:19:19 crc kubenswrapper[4705]: I0216 15:19:19.425286 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-central-agent" containerID="cri-o://a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53" gracePeriod=30 Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245155 4705 generic.go:334] "Generic (PLEG): container finished" podID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerID="641efa60bfe91a134040872171cd5dc36af1a54a2fde0519e82073f0282da31f" exitCode=0 Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245206 4705 generic.go:334] "Generic (PLEG): container finished" podID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerID="7519732cba0c1f76a21cb14cdae25f64ef4456a2f380bc80aa084048307f5fc9" exitCode=2 Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245218 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerID="91aeb8594453ff9bcd51e5fe0ab599a752c34f01daa891375ee555aec1791e45" exitCode=0 Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245227 4705 generic.go:334] "Generic (PLEG): container finished" podID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerID="a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53" exitCode=0 Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245258 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerDied","Data":"641efa60bfe91a134040872171cd5dc36af1a54a2fde0519e82073f0282da31f"} Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245299 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerDied","Data":"7519732cba0c1f76a21cb14cdae25f64ef4456a2f380bc80aa084048307f5fc9"} Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245313 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerDied","Data":"91aeb8594453ff9bcd51e5fe0ab599a752c34f01daa891375ee555aec1791e45"} Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.245326 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerDied","Data":"a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53"} Feb 16 15:19:20 crc kubenswrapper[4705]: E0216 15:19:20.384778 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c9c5323_a947_4c1b_ac75_ae64fd17a7a8.slice/crio-conmon-a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c9c5323_a947_4c1b_ac75_ae64fd17a7a8.slice/crio-a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.781897 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.871511 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-log-httpd\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.871662 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-config-data\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.871798 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-sg-core-conf-yaml\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.871834 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-run-httpd\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.871859 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-ceilometer-tls-certs\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.871904 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp98g\" (UniqueName: \"kubernetes.io/projected/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-kube-api-access-zp98g\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.872154 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-scripts\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.872183 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-combined-ca-bundle\") pod \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\" (UID: \"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8\") " Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.872820 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.873537 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.874028 4705 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.874055 4705 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.883730 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-kube-api-access-zp98g" (OuterVolumeSpecName: "kube-api-access-zp98g") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "kube-api-access-zp98g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.898924 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-scripts" (OuterVolumeSpecName: "scripts") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.970842 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.978271 4705 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.978307 4705 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.978337 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp98g\" (UniqueName: \"kubernetes.io/projected/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-kube-api-access-zp98g\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:20 crc kubenswrapper[4705]: I0216 15:19:20.996207 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.056782 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.080876 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.080913 4705 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.140643 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-config-data" (OuterVolumeSpecName: "config-data") pod "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" (UID: "9c9c5323-a947-4c1b-ac75-ae64fd17a7a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.184524 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.258799 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c9c5323-a947-4c1b-ac75-ae64fd17a7a8","Type":"ContainerDied","Data":"b14abde630b2bee0d5eb1b3685bd917b7fcb2ae39f9d9939adcb84271012d464"} Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.258890 4705 scope.go:117] "RemoveContainer" containerID="641efa60bfe91a134040872171cd5dc36af1a54a2fde0519e82073f0282da31f" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.259156 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.307454 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.322675 4705 scope.go:117] "RemoveContainer" containerID="7519732cba0c1f76a21cb14cdae25f64ef4456a2f380bc80aa084048307f5fc9" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.329424 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.343773 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:19:21 crc kubenswrapper[4705]: E0216 15:19:21.344599 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="sg-core" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344621 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="sg-core" Feb 16 15:19:21 crc 
kubenswrapper[4705]: E0216 15:19:21.344640 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="proxy-httpd" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344646 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="proxy-httpd" Feb 16 15:19:21 crc kubenswrapper[4705]: E0216 15:19:21.344681 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-notification-agent" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344688 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-notification-agent" Feb 16 15:19:21 crc kubenswrapper[4705]: E0216 15:19:21.344695 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-central-agent" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344703 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-central-agent" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344950 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="ceilometer-notification-agent" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344967 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="sg-core" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344981 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" containerName="proxy-httpd" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.344992 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" 
containerName="ceilometer-central-agent" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.347414 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.351318 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.351427 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.352617 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.356420 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.370743 4705 scope.go:117] "RemoveContainer" containerID="91aeb8594453ff9bcd51e5fe0ab599a752c34f01daa891375ee555aec1791e45" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.392742 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.392868 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-config-data\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.392897 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-scripts\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.392934 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf945\" (UniqueName: \"kubernetes.io/projected/0eefb1ac-9933-45ff-a3de-de6a375bef45-kube-api-access-xf945\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.392970 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.392996 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0eefb1ac-9933-45ff-a3de-de6a375bef45-run-httpd\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.393032 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.393053 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0eefb1ac-9933-45ff-a3de-de6a375bef45-log-httpd\") pod \"ceilometer-0\" (UID: 
\"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.424510 4705 scope.go:117] "RemoveContainer" containerID="a971d615655e46c8278bc28a10866bb13fe7d9f8ded86e16bc6e73eb4a334f53" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.496687 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-config-data\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.496762 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-scripts\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.496832 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf945\" (UniqueName: \"kubernetes.io/projected/0eefb1ac-9933-45ff-a3de-de6a375bef45-kube-api-access-xf945\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.496881 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.496915 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0eefb1ac-9933-45ff-a3de-de6a375bef45-run-httpd\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " 
pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.496997 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.497022 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0eefb1ac-9933-45ff-a3de-de6a375bef45-log-httpd\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.497198 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.500127 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0eefb1ac-9933-45ff-a3de-de6a375bef45-log-httpd\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.501005 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0eefb1ac-9933-45ff-a3de-de6a375bef45-run-httpd\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.505530 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.506048 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.506806 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-config-data\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.507330 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.507506 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eefb1ac-9933-45ff-a3de-de6a375bef45-scripts\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.520299 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf945\" (UniqueName: \"kubernetes.io/projected/0eefb1ac-9933-45ff-a3de-de6a375bef45-kube-api-access-xf945\") pod \"ceilometer-0\" (UID: \"0eefb1ac-9933-45ff-a3de-de6a375bef45\") " pod="openstack/ceilometer-0" Feb 16 15:19:21 crc kubenswrapper[4705]: I0216 15:19:21.673663 4705 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:19:22 crc kubenswrapper[4705]: I0216 15:19:22.247220 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:19:22 crc kubenswrapper[4705]: I0216 15:19:22.287088 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0eefb1ac-9933-45ff-a3de-de6a375bef45","Type":"ContainerStarted","Data":"e38ea5175f250f4c1e5be4639893d0d75a4d0e0b967d1621c26438a4d0f3cb21"} Feb 16 15:19:22 crc kubenswrapper[4705]: E0216 15:19:22.354200 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:19:22 crc kubenswrapper[4705]: E0216 15:19:22.354289 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:19:22 crc kubenswrapper[4705]: E0216 15:19:22.354545 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 15:19:22 crc kubenswrapper[4705]: I0216 15:19:22.435600 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c9c5323-a947-4c1b-ac75-ae64fd17a7a8" path="/var/lib/kubelet/pods/9c9c5323-a947-4c1b-ac75-ae64fd17a7a8/volumes" Feb 16 15:19:23 crc kubenswrapper[4705]: I0216 15:19:23.300569 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0eefb1ac-9933-45ff-a3de-de6a375bef45","Type":"ContainerStarted","Data":"e7aa3da3d6c30bd5a32a8afa1f687a1d814d7de856ca8413e867c53f3f8d407f"} Feb 16 15:19:23 crc kubenswrapper[4705]: I0216 15:19:23.568932 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="rabbitmq" containerID="cri-o://a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641" gracePeriod=604795 Feb 16 15:19:23 crc kubenswrapper[4705]: I0216 15:19:23.734997 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 16 15:19:24 crc kubenswrapper[4705]: I0216 15:19:24.316071 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0eefb1ac-9933-45ff-a3de-de6a375bef45","Type":"ContainerStarted","Data":"74c6e75428ac9c8870fd387cf77f7813e11fdba438b2629b37ad9589d37dca29"} Feb 16 15:19:24 crc kubenswrapper[4705]: I0216 15:19:24.925669 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerName="rabbitmq" containerID="cri-o://9f6994c40bbdc294c2e47b9d750eb837f2ca96e2252dda9f1acab79e978bee8f" gracePeriod=604795 Feb 16 15:19:25 crc kubenswrapper[4705]: E0216 15:19:25.904421 4705 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:19:26 crc kubenswrapper[4705]: I0216 15:19:26.353808 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0eefb1ac-9933-45ff-a3de-de6a375bef45","Type":"ContainerStarted","Data":"a7d39367f686cd15b7f8f95563076f8b9c94da472429a2bca19c7cb952502e12"} Feb 16 15:19:26 crc kubenswrapper[4705]: I0216 15:19:26.354355 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:19:26 crc kubenswrapper[4705]: E0216 15:19:26.357436 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:19:27 crc kubenswrapper[4705]: E0216 15:19:27.370248 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:19:29 crc kubenswrapper[4705]: I0216 15:19:29.508882 4705 scope.go:117] "RemoveContainer" containerID="6b13db9b9dc4dcec392ffa4e74f00a9ee43871effc42f68cb3ed77e75924c36e" Feb 16 
15:19:29 crc kubenswrapper[4705]: I0216 15:19:29.570045 4705 scope.go:117] "RemoveContainer" containerID="02261dd51fff83f1f769426874aaf3ab8c54221acecfe72a2bd0b7b7e293e788" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.387999 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.402602 4705 generic.go:334] "Generic (PLEG): container finished" podID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerID="a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641" exitCode=0 Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.402671 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f6b410b5-951c-43d2-b846-3fef02ec0f7f","Type":"ContainerDied","Data":"a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641"} Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.402716 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f6b410b5-951c-43d2-b846-3fef02ec0f7f","Type":"ContainerDied","Data":"ba74fdfcb7efec48976e7232011d375059db8616337cd4b51be00bbb131415c9"} Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.402742 4705 scope.go:117] "RemoveContainer" containerID="a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.402746 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.467087 4705 scope.go:117] "RemoveContainer" containerID="3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.597147 4705 scope.go:117] "RemoveContainer" containerID="a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641" Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.599280 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641\": container with ID starting with a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641 not found: ID does not exist" containerID="a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.599344 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641"} err="failed to get container status \"a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641\": rpc error: code = NotFound desc = could not find container \"a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641\": container with ID starting with a62d1a0b06b59b06e0677c0f1c4bf8d343e3832fb2f8bd7fff79a9dc34547641 not found: ID does not exist" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.599393 4705 scope.go:117] "RemoveContainer" containerID="3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523" Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.599968 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523\": container with ID starting with 
3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523 not found: ID does not exist" containerID="3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.600019 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523"} err="failed to get container status \"3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523\": rpc error: code = NotFound desc = could not find container \"3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523\": container with ID starting with 3e6af4e309f1fea93273c336e19d6d788b901062821b10490a1957309f5b5523 not found: ID does not exist" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.623711 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-server-conf\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.624735 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.624969 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6b410b5-951c-43d2-b846-3fef02ec0f7f-pod-info\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.625007 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-config-data\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.625034 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-plugins-conf\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.625882 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.629310 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-confd\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.629388 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6b410b5-951c-43d2-b846-3fef02ec0f7f-erlang-cookie-secret\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.629454 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrknb\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-kube-api-access-vrknb\") pod 
\"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.629528 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-plugins\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.629583 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-tls\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.629781 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-erlang-cookie\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.631321 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.633354 4705 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.633516 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.638553 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.640029 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f6b410b5-951c-43d2-b846-3fef02ec0f7f-pod-info" (OuterVolumeSpecName: "pod-info") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.647931 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.648282 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b410b5-951c-43d2-b846-3fef02ec0f7f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.651475 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-kube-api-access-vrknb" (OuterVolumeSpecName: "kube-api-access-vrknb") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "kube-api-access-vrknb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.671911 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-config-data" (OuterVolumeSpecName: "config-data") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.677990 4705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e podName:f6b410b5-951c-43d2-b846-3fef02ec0f7f nodeName:}" failed. No retries permitted until 2026-02-16 15:19:31.177958837 +0000 UTC m=+1565.362935913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.685401 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.685497 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.685732 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:19:30 crc kubenswrapper[4705]: E0216 15:19:30.686872 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.733382 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-server-conf" (OuterVolumeSpecName: "server-conf") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736401 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736425 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736438 4705 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736448 4705 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f6b410b5-951c-43d2-b846-3fef02ec0f7f-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736457 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6b410b5-951c-43d2-b846-3fef02ec0f7f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736467 4705 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f6b410b5-951c-43d2-b846-3fef02ec0f7f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.736477 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrknb\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-kube-api-access-vrknb\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 
15:19:30.809657 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:30 crc kubenswrapper[4705]: I0216 15:19:30.840179 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f6b410b5-951c-43d2-b846-3fef02ec0f7f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.251836 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\" (UID: \"f6b410b5-951c-43d2-b846-3fef02ec0f7f\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.281560 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e" (OuterVolumeSpecName: "persistence") pod "f6b410b5-951c-43d2-b846-3fef02ec0f7f" (UID: "f6b410b5-951c-43d2-b846-3fef02ec0f7f"). InnerVolumeSpecName "pvc-49db22ca-5365-4dcc-af52-2ea57a09051e". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.355787 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") on node \"crc\" " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.356246 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.379051 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.411489 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:19:31 crc kubenswrapper[4705]: E0216 15:19:31.412738 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="setup-container" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.412781 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="setup-container" Feb 16 15:19:31 crc kubenswrapper[4705]: E0216 15:19:31.412878 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="rabbitmq" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.412888 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="rabbitmq" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.413891 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" containerName="rabbitmq" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.416008 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.438977 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.450263 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.450502 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-49db22ca-5365-4dcc-af52-2ea57a09051e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e") on node "crc" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.456874 4705 generic.go:334] "Generic (PLEG): container finished" podID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerID="9f6994c40bbdc294c2e47b9d750eb837f2ca96e2252dda9f1acab79e978bee8f" exitCode=0 Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.456946 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070373d6-b0bd-43e2-bdf5-ca300875e65d","Type":"ContainerDied","Data":"9f6994c40bbdc294c2e47b9d750eb837f2ca96e2252dda9f1acab79e978bee8f"} Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.460170 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.564333 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 
crc kubenswrapper[4705]: I0216 15:19:31.564525 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-config-data\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.564575 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.564590 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565155 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565235 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565262 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3671c78-83d9-45b6-a869-d08abfa12906-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565285 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565318 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3671c78-83d9-45b6-a869-d08abfa12906-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565340 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pmzk\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-kube-api-access-8pmzk\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.565390 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669409 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-config-data\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669475 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669497 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669580 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669632 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669654 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3671c78-83d9-45b6-a869-d08abfa12906-pod-info\") pod 
\"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669677 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669701 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3671c78-83d9-45b6-a869-d08abfa12906-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669720 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pmzk\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-kube-api-access-8pmzk\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669749 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.669784 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " 
pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.670357 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.671131 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-server-conf\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.671645 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.673183 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.676621 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3671c78-83d9-45b6-a869-d08abfa12906-config-data\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.678452 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f3671c78-83d9-45b6-a869-d08abfa12906-pod-info\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.678477 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3671c78-83d9-45b6-a869-d08abfa12906-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.678676 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.679140 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2e04bcb153e3e04f037e1fc841d6f137a96f2052e5c7d3319ec9bf09db685a60/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.679587 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.684424 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.684493 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.684541 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.685188 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.685761 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.685824 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" gracePeriod=600 Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.700187 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pmzk\" 
(UniqueName: \"kubernetes.io/projected/f3671c78-83d9-45b6-a869-d08abfa12906-kube-api-access-8pmzk\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.752781 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49db22ca-5365-4dcc-af52-2ea57a09051e\") pod \"rabbitmq-server-2\" (UID: \"f3671c78-83d9-45b6-a869-d08abfa12906\") " pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.775769 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.839324 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:31 crc kubenswrapper[4705]: E0216 15:19:31.850880 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.869290 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-hggbw"] Feb 16 15:19:31 crc kubenswrapper[4705]: E0216 15:19:31.870121 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerName="rabbitmq" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.870139 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" 
containerName="rabbitmq" Feb 16 15:19:31 crc kubenswrapper[4705]: E0216 15:19:31.870156 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerName="setup-container" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.870163 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerName="setup-container" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.870393 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" containerName="rabbitmq" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.873734 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.886889 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.937128 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-hggbw"] Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.987758 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-config-data\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.988277 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfwxp\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-kube-api-access-gfwxp\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.988379 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-confd\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992474 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992528 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-server-conf\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992572 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-erlang-cookie\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992610 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/070373d6-b0bd-43e2-bdf5-ca300875e65d-erlang-cookie-secret\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992652 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-plugins\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: 
\"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992671 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-tls\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992749 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-plugins-conf\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.992811 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/070373d6-b0bd-43e2-bdf5-ca300875e65d-pod-info\") pod \"070373d6-b0bd-43e2-bdf5-ca300875e65d\" (UID: \"070373d6-b0bd-43e2-bdf5-ca300875e65d\") " Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993326 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993384 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993428 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993458 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmtx5\" (UniqueName: \"kubernetes.io/projected/d8684c18-9b3b-468c-b055-c6bbc838aba7-kube-api-access-fmtx5\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993506 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993569 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.993614 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:31 crc 
kubenswrapper[4705]: I0216 15:19:31.996176 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:31 crc kubenswrapper[4705]: I0216 15:19:31.996740 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.000827 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.002534 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070373d6-b0bd-43e2-bdf5-ca300875e65d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.009642 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-kube-api-access-gfwxp" (OuterVolumeSpecName: "kube-api-access-gfwxp") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "kube-api-access-gfwxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.018493 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.018567 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/070373d6-b0bd-43e2-bdf5-ca300875e65d-pod-info" (OuterVolumeSpecName: "pod-info") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.041533 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a" (OuterVolumeSpecName: "persistence") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.078941 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-config-data" (OuterVolumeSpecName: "config-data") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.098028 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.098131 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.098332 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.099048 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" 
Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.100589 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.100750 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.100808 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.100991 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmtx5\" (UniqueName: \"kubernetes.io/projected/d8684c18-9b3b-468c-b055-c6bbc838aba7-kube-api-access-fmtx5\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.101234 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 
15:19:32.101721 4705 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/070373d6-b0bd-43e2-bdf5-ca300875e65d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.101810 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.101872 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.101928 4705 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.101986 4705 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/070373d6-b0bd-43e2-bdf5-ca300875e65d-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.102047 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.102103 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfwxp\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-kube-api-access-gfwxp\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.102186 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") on node \"crc\" " Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.102251 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.102926 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.107756 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.107918 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.110796 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.125048 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmtx5\" (UniqueName: \"kubernetes.io/projected/d8684c18-9b3b-468c-b055-c6bbc838aba7-kube-api-access-fmtx5\") pod \"dnsmasq-dns-7d84b4d45c-hggbw\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.131421 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-server-conf" (OuterVolumeSpecName: "server-conf") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.208196 4705 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/070373d6-b0bd-43e2-bdf5-ca300875e65d-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.217275 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "070373d6-b0bd-43e2-bdf5-ca300875e65d" (UID: "070373d6-b0bd-43e2-bdf5-ca300875e65d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.218823 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.219021 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a") on node "crc" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.241883 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.317610 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/070373d6-b0bd-43e2-bdf5-ca300875e65d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.317670 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.438447 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b410b5-951c-43d2-b846-3fef02ec0f7f" path="/var/lib/kubelet/pods/f6b410b5-951c-43d2-b846-3fef02ec0f7f/volumes" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.490042 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.490254 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"070373d6-b0bd-43e2-bdf5-ca300875e65d","Type":"ContainerDied","Data":"9536c4826f2994651344a9956c3c00d2cb404777160d90908e2937cd52e8fb5f"} Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.490319 4705 scope.go:117] "RemoveContainer" containerID="9f6994c40bbdc294c2e47b9d750eb837f2ca96e2252dda9f1acab79e978bee8f" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.496284 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" exitCode=0 Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.496335 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29"} Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.497535 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:19:32 crc kubenswrapper[4705]: E0216 15:19:32.497896 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.572127 4705 scope.go:117] "RemoveContainer" containerID="663ebd3ccb0d52cf06babb260d76ccd359a0593b49138f63e6178bfe5bfd914d" Feb 16 
15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.572730 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.615344 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:19:32 crc kubenswrapper[4705]: W0216 15:19:32.634710 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3671c78_83d9_45b6_a869_d08abfa12906.slice/crio-96966772efe91e39ec17d0e663e4cc95dd42475501b39c108948b1d90bb5cec6 WatchSource:0}: Error finding container 96966772efe91e39ec17d0e663e4cc95dd42475501b39c108948b1d90bb5cec6: Status 404 returned error can't find the container with id 96966772efe91e39ec17d0e663e4cc95dd42475501b39c108948b1d90bb5cec6 Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.638296 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.660525 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.664275 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.667095 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.668440 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.668610 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jzl8w" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.668866 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.669143 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.669311 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.669764 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739350 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739432 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739475 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739521 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35504e73-1115-4e30-8ef7-95e85f31eaf6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739549 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739577 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739596 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35504e73-1115-4e30-8ef7-95e85f31eaf6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739614 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739663 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739693 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.739713 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gnvt\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-kube-api-access-8gnvt\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.753791 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.792952 4705 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-hggbw"] Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.801639 4705 scope.go:117] "RemoveContainer" containerID="99f3f757d43d2fd38017e0cb3e452f132236200b0a90db50ba2e30cfa5620a38" Feb 16 15:19:32 crc kubenswrapper[4705]: W0216 15:19:32.809154 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8684c18_9b3b_468c_b055_c6bbc838aba7.slice/crio-119dc3565e79f73cc3ad7d0af017acfdec6089f2268b9597292522f805093601 WatchSource:0}: Error finding container 119dc3565e79f73cc3ad7d0af017acfdec6089f2268b9597292522f805093601: Status 404 returned error can't find the container with id 119dc3565e79f73cc3ad7d0af017acfdec6089f2268b9597292522f805093601 Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.842804 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843197 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843238 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843282 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35504e73-1115-4e30-8ef7-95e85f31eaf6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843306 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843331 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843351 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35504e73-1115-4e30-8ef7-95e85f31eaf6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843386 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843524 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843575 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.843600 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gnvt\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-kube-api-access-8gnvt\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.844969 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.846762 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.846815 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.847577 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.848485 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/35504e73-1115-4e30-8ef7-95e85f31eaf6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.858279 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.858322 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/15fddb9283d0361ec376f6d3697b3a7dae141e971c813fd76f875f1c98aad2dc/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.858922 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.866608 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35504e73-1115-4e30-8ef7-95e85f31eaf6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.868698 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.870495 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gnvt\" (UniqueName: \"kubernetes.io/projected/35504e73-1115-4e30-8ef7-95e85f31eaf6-kube-api-access-8gnvt\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.875035 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35504e73-1115-4e30-8ef7-95e85f31eaf6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:32 crc kubenswrapper[4705]: I0216 15:19:32.915524 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b367f06b-bdb1-417f-9354-5cf7e70b520a\") pod \"rabbitmq-cell1-server-0\" (UID: \"35504e73-1115-4e30-8ef7-95e85f31eaf6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:33 crc kubenswrapper[4705]: I0216 15:19:33.164201 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:19:33 crc kubenswrapper[4705]: I0216 15:19:33.517598 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f3671c78-83d9-45b6-a869-d08abfa12906","Type":"ContainerStarted","Data":"96966772efe91e39ec17d0e663e4cc95dd42475501b39c108948b1d90bb5cec6"} Feb 16 15:19:33 crc kubenswrapper[4705]: I0216 15:19:33.522063 4705 generic.go:334] "Generic (PLEG): container finished" podID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerID="a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76" exitCode=0 Feb 16 15:19:33 crc kubenswrapper[4705]: I0216 15:19:33.522123 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" event={"ID":"d8684c18-9b3b-468c-b055-c6bbc838aba7","Type":"ContainerDied","Data":"a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76"} Feb 16 15:19:33 crc kubenswrapper[4705]: I0216 15:19:33.522157 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" event={"ID":"d8684c18-9b3b-468c-b055-c6bbc838aba7","Type":"ContainerStarted","Data":"119dc3565e79f73cc3ad7d0af017acfdec6089f2268b9597292522f805093601"} Feb 16 15:19:33 crc kubenswrapper[4705]: I0216 15:19:33.694604 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:19:33 crc kubenswrapper[4705]: W0216 15:19:33.707988 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35504e73_1115_4e30_8ef7_95e85f31eaf6.slice/crio-78c2d5a0aca9862122f4795e9053c0cadcd9463584a5917f7da720916fb56c9a WatchSource:0}: Error finding container 78c2d5a0aca9862122f4795e9053c0cadcd9463584a5917f7da720916fb56c9a: Status 404 returned error can't find the container with id 78c2d5a0aca9862122f4795e9053c0cadcd9463584a5917f7da720916fb56c9a Feb 16 15:19:34 crc kubenswrapper[4705]: I0216 
15:19:34.436190 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="070373d6-b0bd-43e2-bdf5-ca300875e65d" path="/var/lib/kubelet/pods/070373d6-b0bd-43e2-bdf5-ca300875e65d/volumes" Feb 16 15:19:34 crc kubenswrapper[4705]: I0216 15:19:34.538021 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" event={"ID":"d8684c18-9b3b-468c-b055-c6bbc838aba7","Type":"ContainerStarted","Data":"e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa"} Feb 16 15:19:34 crc kubenswrapper[4705]: I0216 15:19:34.538364 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:34 crc kubenswrapper[4705]: I0216 15:19:34.540643 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"35504e73-1115-4e30-8ef7-95e85f31eaf6","Type":"ContainerStarted","Data":"78c2d5a0aca9862122f4795e9053c0cadcd9463584a5917f7da720916fb56c9a"} Feb 16 15:19:34 crc kubenswrapper[4705]: I0216 15:19:34.566962 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" podStartSLOduration=3.56694215 podStartE2EDuration="3.56694215s" podCreationTimestamp="2026-02-16 15:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:19:34.559610702 +0000 UTC m=+1568.744587778" watchObservedRunningTime="2026-02-16 15:19:34.56694215 +0000 UTC m=+1568.751919226" Feb 16 15:19:35 crc kubenswrapper[4705]: I0216 15:19:35.558360 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f3671c78-83d9-45b6-a869-d08abfa12906","Type":"ContainerStarted","Data":"c60cf7b300f19c6c6692e856418236dd8a19116e7d3d027f62ba6710b5671bac"} Feb 16 15:19:36 crc kubenswrapper[4705]: I0216 15:19:36.577799 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"35504e73-1115-4e30-8ef7-95e85f31eaf6","Type":"ContainerStarted","Data":"fef7545d7c2f39215e80c2f1481975dc678006ee7b6d820d9447a191742f14ea"} Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.549641 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vkkhj"] Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.554584 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.564526 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkkhj"] Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.661399 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-catalog-content\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.661499 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l576\" (UniqueName: \"kubernetes.io/projected/cf747da3-b6aa-42f8-8339-fd2189d24bd0-kube-api-access-5l576\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.661542 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-utilities\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc 
kubenswrapper[4705]: I0216 15:19:39.764013 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-catalog-content\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.764103 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l576\" (UniqueName: \"kubernetes.io/projected/cf747da3-b6aa-42f8-8339-fd2189d24bd0-kube-api-access-5l576\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.764142 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-utilities\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.764675 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-utilities\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.765109 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-catalog-content\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.791212 4705 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l576\" (UniqueName: \"kubernetes.io/projected/cf747da3-b6aa-42f8-8339-fd2189d24bd0-kube-api-access-5l576\") pod \"redhat-marketplace-vkkhj\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:39 crc kubenswrapper[4705]: I0216 15:19:39.886049 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:40 crc kubenswrapper[4705]: I0216 15:19:40.441538 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkkhj"] Feb 16 15:19:40 crc kubenswrapper[4705]: W0216 15:19:40.449102 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf747da3_b6aa_42f8_8339_fd2189d24bd0.slice/crio-9fedf063c07305bad6d26ba4546cbfd3217e5bfdae81adabd26b1c3f57e3a9a3 WatchSource:0}: Error finding container 9fedf063c07305bad6d26ba4546cbfd3217e5bfdae81adabd26b1c3f57e3a9a3: Status 404 returned error can't find the container with id 9fedf063c07305bad6d26ba4546cbfd3217e5bfdae81adabd26b1c3f57e3a9a3 Feb 16 15:19:40 crc kubenswrapper[4705]: I0216 15:19:40.666314 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkkhj" event={"ID":"cf747da3-b6aa-42f8-8339-fd2189d24bd0","Type":"ContainerStarted","Data":"9fedf063c07305bad6d26ba4546cbfd3217e5bfdae81adabd26b1c3f57e3a9a3"} Feb 16 15:19:41 crc kubenswrapper[4705]: I0216 15:19:41.681308 4705 generic.go:334] "Generic (PLEG): container finished" podID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerID="793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26" exitCode=0 Feb 16 15:19:41 crc kubenswrapper[4705]: I0216 15:19:41.681385 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkkhj" 
event={"ID":"cf747da3-b6aa-42f8-8339-fd2189d24bd0","Type":"ContainerDied","Data":"793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26"} Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.275563 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.358800 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx"] Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.359066 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerName="dnsmasq-dns" containerID="cri-o://44229c16dd4052675ac541b69178773030255dd4012f291db029d9bed3fffff7" gracePeriod=10 Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.459587 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 15:19:42 crc kubenswrapper[4705]: E0216 15:19:42.568386 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:19:42 crc kubenswrapper[4705]: E0216 15:19:42.568455 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:19:42 crc kubenswrapper[4705]: E0216 15:19:42.568622 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:19:42 crc kubenswrapper[4705]: E0216 15:19:42.578071 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.628667 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-l9dk8"] Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.641706 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-l9dk8"] Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.641822 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.706523 4705 generic.go:334] "Generic (PLEG): container finished" podID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerID="44229c16dd4052675ac541b69178773030255dd4012f291db029d9bed3fffff7" exitCode=0 Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.707861 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" event={"ID":"33cb0a6c-7599-4301-b7f4-630b9ccfdf42","Type":"ContainerDied","Data":"44229c16dd4052675ac541b69178773030255dd4012f291db029d9bed3fffff7"} Feb 16 15:19:42 crc kubenswrapper[4705]: E0216 15:19:42.710105 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.799863 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " 
pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.800142 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.800188 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.800345 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-config\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.800596 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.800703 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pch6v\" (UniqueName: \"kubernetes.io/projected/414f383c-09a6-4895-81cc-e12f73391831-kube-api-access-pch6v\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: 
\"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.800830 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903259 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-config\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903377 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903424 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pch6v\" (UniqueName: \"kubernetes.io/projected/414f383c-09a6-4895-81cc-e12f73391831-kube-api-access-pch6v\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903474 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " 
pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903532 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903592 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.903612 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.904744 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.905281 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-config\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 
15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.905792 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.907249 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.907598 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.908093 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414f383c-09a6-4895-81cc-e12f73391831-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.931319 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pch6v\" (UniqueName: \"kubernetes.io/projected/414f383c-09a6-4895-81cc-e12f73391831-kube-api-access-pch6v\") pod \"dnsmasq-dns-6f6df4f56c-l9dk8\" (UID: \"414f383c-09a6-4895-81cc-e12f73391831\") " pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:42 crc kubenswrapper[4705]: I0216 15:19:42.986084 4705 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.219997 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.325632 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z7sr\" (UniqueName: \"kubernetes.io/projected/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-kube-api-access-5z7sr\") pod \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.325845 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-svc\") pod \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.325867 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-sb\") pod \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.325914 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-nb\") pod \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.326917 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-config\") pod \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\" (UID: 
\"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.327457 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-swift-storage-0\") pod \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\" (UID: \"33cb0a6c-7599-4301-b7f4-630b9ccfdf42\") " Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.332662 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-kube-api-access-5z7sr" (OuterVolumeSpecName: "kube-api-access-5z7sr") pod "33cb0a6c-7599-4301-b7f4-630b9ccfdf42" (UID: "33cb0a6c-7599-4301-b7f4-630b9ccfdf42"). InnerVolumeSpecName "kube-api-access-5z7sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.337291 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z7sr\" (UniqueName: \"kubernetes.io/projected/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-kube-api-access-5z7sr\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.449587 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "33cb0a6c-7599-4301-b7f4-630b9ccfdf42" (UID: "33cb0a6c-7599-4301-b7f4-630b9ccfdf42"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.450798 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "33cb0a6c-7599-4301-b7f4-630b9ccfdf42" (UID: "33cb0a6c-7599-4301-b7f4-630b9ccfdf42"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.450897 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-config" (OuterVolumeSpecName: "config") pod "33cb0a6c-7599-4301-b7f4-630b9ccfdf42" (UID: "33cb0a6c-7599-4301-b7f4-630b9ccfdf42"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.464320 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33cb0a6c-7599-4301-b7f4-630b9ccfdf42" (UID: "33cb0a6c-7599-4301-b7f4-630b9ccfdf42"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.466825 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "33cb0a6c-7599-4301-b7f4-630b9ccfdf42" (UID: "33cb0a6c-7599-4301-b7f4-630b9ccfdf42"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.545619 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.545672 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.545687 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.545701 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.545713 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33cb0a6c-7599-4301-b7f4-630b9ccfdf42-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.555855 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-l9dk8"] Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.722825 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" event={"ID":"414f383c-09a6-4895-81cc-e12f73391831","Type":"ContainerStarted","Data":"5da80254409d1f5702b9b50ca3cf24d99fa5621b6bbfa7fd535c598b1f8d5c4c"} Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.725442 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerID="0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b" exitCode=0 Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.725499 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkkhj" event={"ID":"cf747da3-b6aa-42f8-8339-fd2189d24bd0","Type":"ContainerDied","Data":"0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b"} Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.731634 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" event={"ID":"33cb0a6c-7599-4301-b7f4-630b9ccfdf42","Type":"ContainerDied","Data":"fd288e684e0a43e4b376cb33683431b8af354b638eab9d3f39fe75d11b79e614"} Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.731694 4705 scope.go:117] "RemoveContainer" containerID="44229c16dd4052675ac541b69178773030255dd4012f291db029d9bed3fffff7" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.732813 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.776900 4705 scope.go:117] "RemoveContainer" containerID="6eed687bcb719d3e812c0d5596618acff3bcb4d19391166e9b43a17a41b58c2d" Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.808193 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx"] Feb 16 15:19:43 crc kubenswrapper[4705]: I0216 15:19:43.824620 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-t6qzx"] Feb 16 15:19:44 crc kubenswrapper[4705]: I0216 15:19:44.435432 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" path="/var/lib/kubelet/pods/33cb0a6c-7599-4301-b7f4-630b9ccfdf42/volumes" Feb 16 15:19:44 crc kubenswrapper[4705]: I0216 15:19:44.750014 4705 generic.go:334] "Generic (PLEG): container finished" podID="414f383c-09a6-4895-81cc-e12f73391831" containerID="a4367fb47635f9d5624022d97a599f3c7e514c4f22ebb280fe343935e0e53ac2" exitCode=0 Feb 16 15:19:44 crc kubenswrapper[4705]: I0216 15:19:44.750075 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" event={"ID":"414f383c-09a6-4895-81cc-e12f73391831","Type":"ContainerDied","Data":"a4367fb47635f9d5624022d97a599f3c7e514c4f22ebb280fe343935e0e53ac2"} Feb 16 15:19:44 crc kubenswrapper[4705]: I0216 15:19:44.754925 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkkhj" event={"ID":"cf747da3-b6aa-42f8-8339-fd2189d24bd0","Type":"ContainerStarted","Data":"873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd"} Feb 16 15:19:44 crc kubenswrapper[4705]: I0216 15:19:44.825464 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vkkhj" podStartSLOduration=3.373897167 podStartE2EDuration="5.825443802s" 
podCreationTimestamp="2026-02-16 15:19:39 +0000 UTC" firstStartedPulling="2026-02-16 15:19:41.68455024 +0000 UTC m=+1575.869527316" lastFinishedPulling="2026-02-16 15:19:44.136096875 +0000 UTC m=+1578.321073951" observedRunningTime="2026-02-16 15:19:44.822060856 +0000 UTC m=+1579.007037942" watchObservedRunningTime="2026-02-16 15:19:44.825443802 +0000 UTC m=+1579.010420898" Feb 16 15:19:45 crc kubenswrapper[4705]: E0216 15:19:45.442259 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:19:45 crc kubenswrapper[4705]: I0216 15:19:45.769117 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" event={"ID":"414f383c-09a6-4895-81cc-e12f73391831","Type":"ContainerStarted","Data":"5dc53870a6819e03dc212784d395a4a5c246cb7933c229fdea896abac87855f2"} Feb 16 15:19:45 crc kubenswrapper[4705]: I0216 15:19:45.795385 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" podStartSLOduration=3.795347276 podStartE2EDuration="3.795347276s" podCreationTimestamp="2026-02-16 15:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:19:45.793323319 +0000 UTC m=+1579.978300405" watchObservedRunningTime="2026-02-16 15:19:45.795347276 +0000 UTC m=+1579.980324352" Feb 16 15:19:46 crc kubenswrapper[4705]: I0216 15:19:46.429433 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:19:46 crc kubenswrapper[4705]: E0216 15:19:46.429760 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:19:46 crc kubenswrapper[4705]: I0216 15:19:46.792527 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 16 15:19:49 crc kubenswrapper[4705]: I0216 15:19:49.886351 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:49 crc kubenswrapper[4705]: I0216 15:19:49.887292 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:49 crc kubenswrapper[4705]: I0216 15:19:49.943013 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:50 crc kubenswrapper[4705]: I0216 15:19:50.935034 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:51 crc kubenswrapper[4705]: I0216 15:19:51.041632 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkkhj"] Feb 16 15:19:52 crc kubenswrapper[4705]: I0216 15:19:52.887465 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vkkhj" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="registry-server" containerID="cri-o://873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd" gracePeriod=2 Feb 16 15:19:52 crc kubenswrapper[4705]: I0216 15:19:52.987977 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-l9dk8" Feb 
16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.104221 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-hggbw"] Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.104729 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerName="dnsmasq-dns" containerID="cri-o://e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa" gracePeriod=10 Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.689457 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.788092 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l576\" (UniqueName: \"kubernetes.io/projected/cf747da3-b6aa-42f8-8339-fd2189d24bd0-kube-api-access-5l576\") pod \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.788235 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-utilities\") pod \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.788498 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-catalog-content\") pod \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\" (UID: \"cf747da3-b6aa-42f8-8339-fd2189d24bd0\") " Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.789498 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-utilities" 
(OuterVolumeSpecName: "utilities") pod "cf747da3-b6aa-42f8-8339-fd2189d24bd0" (UID: "cf747da3-b6aa-42f8-8339-fd2189d24bd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.798129 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf747da3-b6aa-42f8-8339-fd2189d24bd0-kube-api-access-5l576" (OuterVolumeSpecName: "kube-api-access-5l576") pod "cf747da3-b6aa-42f8-8339-fd2189d24bd0" (UID: "cf747da3-b6aa-42f8-8339-fd2189d24bd0"). InnerVolumeSpecName "kube-api-access-5l576". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.812842 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.822525 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf747da3-b6aa-42f8-8339-fd2189d24bd0" (UID: "cf747da3-b6aa-42f8-8339-fd2189d24bd0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.890888 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-svc\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.890991 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-sb\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.891124 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.891242 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-swift-storage-0\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.891442 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-nb\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.891496 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-openstack-edpm-ipam\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.891569 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmtx5\" (UniqueName: \"kubernetes.io/projected/d8684c18-9b3b-468c-b055-c6bbc838aba7-kube-api-access-fmtx5\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.892280 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.892299 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf747da3-b6aa-42f8-8339-fd2189d24bd0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.892313 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l576\" (UniqueName: \"kubernetes.io/projected/cf747da3-b6aa-42f8-8339-fd2189d24bd0-kube-api-access-5l576\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.901035 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8684c18-9b3b-468c-b055-c6bbc838aba7-kube-api-access-fmtx5" (OuterVolumeSpecName: "kube-api-access-fmtx5") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "kube-api-access-fmtx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.906623 4705 generic.go:334] "Generic (PLEG): container finished" podID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerID="873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd" exitCode=0 Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.906709 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkkhj" event={"ID":"cf747da3-b6aa-42f8-8339-fd2189d24bd0","Type":"ContainerDied","Data":"873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd"} Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.906749 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkkhj" event={"ID":"cf747da3-b6aa-42f8-8339-fd2189d24bd0","Type":"ContainerDied","Data":"9fedf063c07305bad6d26ba4546cbfd3217e5bfdae81adabd26b1c3f57e3a9a3"} Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.906792 4705 scope.go:117] "RemoveContainer" containerID="873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.907176 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkkhj" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.910464 4705 generic.go:334] "Generic (PLEG): container finished" podID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerID="e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa" exitCode=0 Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.910491 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" event={"ID":"d8684c18-9b3b-468c-b055-c6bbc838aba7","Type":"ContainerDied","Data":"e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa"} Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.910507 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" event={"ID":"d8684c18-9b3b-468c-b055-c6bbc838aba7","Type":"ContainerDied","Data":"119dc3565e79f73cc3ad7d0af017acfdec6089f2268b9597292522f805093601"} Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.910616 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-hggbw" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.962902 4705 scope.go:117] "RemoveContainer" containerID="0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.964844 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.995621 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmtx5\" (UniqueName: \"kubernetes.io/projected/d8684c18-9b3b-468c-b055-c6bbc838aba7-kube-api-access-fmtx5\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:53 crc kubenswrapper[4705]: I0216 15:19:53.995677 4705 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.027603 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkkhj"] Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.040194 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkkhj"] Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.054898 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.067329 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.069230 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.080661 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.097018 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config" (OuterVolumeSpecName: "config") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.097988 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config\") pod \"d8684c18-9b3b-468c-b055-c6bbc838aba7\" (UID: \"d8684c18-9b3b-468c-b055-c6bbc838aba7\") " Feb 16 15:19:54 crc kubenswrapper[4705]: W0216 15:19:54.098226 4705 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d8684c18-9b3b-468c-b055-c6bbc838aba7/volumes/kubernetes.io~configmap/config Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.098336 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config" (OuterVolumeSpecName: "config") pod "d8684c18-9b3b-468c-b055-c6bbc838aba7" (UID: "d8684c18-9b3b-468c-b055-c6bbc838aba7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.098854 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.098880 4705 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.098894 4705 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.098905 4705 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.098913 4705 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d8684c18-9b3b-468c-b055-c6bbc838aba7-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.220227 4705 scope.go:117] "RemoveContainer" containerID="793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.287010 4705 scope.go:117] "RemoveContainer" containerID="873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd" Feb 16 15:19:54 crc kubenswrapper[4705]: E0216 15:19:54.287557 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd\": container with ID starting with 873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd not found: ID does not exist" containerID="873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.287592 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd"} err="failed to get container status \"873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd\": rpc error: code = NotFound desc = could not find container \"873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd\": container with ID starting with 873e09246f300391f3e2123ccb340518cb03f6fced603e098021651c246afcbd not found: ID does not exist" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.287614 4705 scope.go:117] "RemoveContainer" containerID="0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b" 
Feb 16 15:19:54 crc kubenswrapper[4705]: E0216 15:19:54.287819 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b\": container with ID starting with 0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b not found: ID does not exist" containerID="0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.287839 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b"} err="failed to get container status \"0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b\": rpc error: code = NotFound desc = could not find container \"0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b\": container with ID starting with 0317a6dc5ef01930c2e0072fd9f563ba3a7b111f5a16b039b665fc3c7e0d174b not found: ID does not exist" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.287854 4705 scope.go:117] "RemoveContainer" containerID="793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26" Feb 16 15:19:54 crc kubenswrapper[4705]: E0216 15:19:54.288182 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26\": container with ID starting with 793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26 not found: ID does not exist" containerID="793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.288310 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26"} err="failed to get container status 
\"793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26\": rpc error: code = NotFound desc = could not find container \"793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26\": container with ID starting with 793eaf219489a7dc5d6476abc3a472559a6a34b209f8a47e1fd79bf7178f6c26 not found: ID does not exist" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.288449 4705 scope.go:117] "RemoveContainer" containerID="e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.299356 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-hggbw"] Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.311444 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-hggbw"] Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.343689 4705 scope.go:117] "RemoveContainer" containerID="a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.372012 4705 scope.go:117] "RemoveContainer" containerID="e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa" Feb 16 15:19:54 crc kubenswrapper[4705]: E0216 15:19:54.372864 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa\": container with ID starting with e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa not found: ID does not exist" containerID="e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.372920 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa"} err="failed to get container status \"e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa\": rpc 
error: code = NotFound desc = could not find container \"e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa\": container with ID starting with e40d129ce885fe37703875ebeb7c12999745051b53c401d498d549983e2e28fa not found: ID does not exist" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.372956 4705 scope.go:117] "RemoveContainer" containerID="a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76" Feb 16 15:19:54 crc kubenswrapper[4705]: E0216 15:19:54.373439 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76\": container with ID starting with a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76 not found: ID does not exist" containerID="a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.373464 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76"} err="failed to get container status \"a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76\": rpc error: code = NotFound desc = could not find container \"a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76\": container with ID starting with a7badb29726db751fee09a1577e403c60efa70bf17d21ca11d062127f50d5a76 not found: ID does not exist" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.434948 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" path="/var/lib/kubelet/pods/cf747da3-b6aa-42f8-8339-fd2189d24bd0/volumes" Feb 16 15:19:54 crc kubenswrapper[4705]: I0216 15:19:54.435718 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" path="/var/lib/kubelet/pods/d8684c18-9b3b-468c-b055-c6bbc838aba7/volumes" Feb 16 
15:19:58 crc kubenswrapper[4705]: E0216 15:19:58.430209 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:19:59 crc kubenswrapper[4705]: E0216 15:19:59.554265 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:19:59 crc kubenswrapper[4705]: E0216 15:19:59.554715 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:19:59 crc kubenswrapper[4705]: E0216 15:19:59.554879 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:19:59 crc kubenswrapper[4705]: E0216 15:19:59.556188 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:20:00 crc kubenswrapper[4705]: I0216 15:20:00.420580 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:20:00 crc kubenswrapper[4705]: E0216 15:20:00.421273 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.329215 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7"] Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331436 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerName="dnsmasq-dns" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.331468 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerName="dnsmasq-dns" Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331495 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="registry-server" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.331507 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="registry-server" Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331519 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerName="dnsmasq-dns" Feb 16 15:20:07 crc 
kubenswrapper[4705]: I0216 15:20:07.331527 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerName="dnsmasq-dns" Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331544 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerName="init" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.331552 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerName="init" Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331569 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="extract-content" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.331576 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="extract-content" Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331598 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="extract-utilities" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.331606 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="extract-utilities" Feb 16 15:20:07 crc kubenswrapper[4705]: E0216 15:20:07.331631 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerName="init" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.331639 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerName="init" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.332018 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf747da3-b6aa-42f8-8339-fd2189d24bd0" containerName="registry-server" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.332033 4705 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d8684c18-9b3b-468c-b055-c6bbc838aba7" containerName="dnsmasq-dns" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.332045 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="33cb0a6c-7599-4301-b7f4-630b9ccfdf42" containerName="dnsmasq-dns" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.333781 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.336150 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.336295 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.337588 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.348735 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7"] Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.371683 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.460267 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.460435 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbnkx\" (UniqueName: \"kubernetes.io/projected/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-kube-api-access-mbnkx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.460596 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.460749 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.564107 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.564211 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.564841 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.564944 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbnkx\" (UniqueName: \"kubernetes.io/projected/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-kube-api-access-mbnkx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.576165 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.576287 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.578930 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.585737 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbnkx\" (UniqueName: \"kubernetes.io/projected/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-kube-api-access-mbnkx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:07 crc kubenswrapper[4705]: I0216 15:20:07.693885 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:08 crc kubenswrapper[4705]: I0216 15:20:08.138583 4705 generic.go:334] "Generic (PLEG): container finished" podID="f3671c78-83d9-45b6-a869-d08abfa12906" containerID="c60cf7b300f19c6c6692e856418236dd8a19116e7d3d027f62ba6710b5671bac" exitCode=0 Feb 16 15:20:08 crc kubenswrapper[4705]: I0216 15:20:08.138662 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f3671c78-83d9-45b6-a869-d08abfa12906","Type":"ContainerDied","Data":"c60cf7b300f19c6c6692e856418236dd8a19116e7d3d027f62ba6710b5671bac"} Feb 16 15:20:08 crc kubenswrapper[4705]: I0216 15:20:08.141152 4705 generic.go:334] "Generic (PLEG): container finished" podID="35504e73-1115-4e30-8ef7-95e85f31eaf6" containerID="fef7545d7c2f39215e80c2f1481975dc678006ee7b6d820d9447a191742f14ea" exitCode=0 Feb 16 15:20:08 crc kubenswrapper[4705]: I0216 15:20:08.141192 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"35504e73-1115-4e30-8ef7-95e85f31eaf6","Type":"ContainerDied","Data":"fef7545d7c2f39215e80c2f1481975dc678006ee7b6d820d9447a191742f14ea"} Feb 16 15:20:08 crc kubenswrapper[4705]: I0216 15:20:08.401420 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7"] Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.157206 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"f3671c78-83d9-45b6-a869-d08abfa12906","Type":"ContainerStarted","Data":"2b5d4c63816c241b28d9efa0f3d9ef3b166de1720523b905fc916740d660f255"} Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.157939 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.165783 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"35504e73-1115-4e30-8ef7-95e85f31eaf6","Type":"ContainerStarted","Data":"6853c50e72d1c1a33aaeee2eb79f064dad0a0023f92687c42b1df2057faad392"} Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.166146 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.169160 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" event={"ID":"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0","Type":"ContainerStarted","Data":"adf58fbd5e38b5411e07f7ddeda61f720afb2ed034692fc1b3b09d54b2b865b0"} Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.197740 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=38.197719952 podStartE2EDuration="38.197719952s" podCreationTimestamp="2026-02-16 15:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:20:09.19129352 +0000 UTC m=+1603.376270616" watchObservedRunningTime="2026-02-16 15:20:09.197719952 +0000 UTC m=+1603.382697028" Feb 16 15:20:09 crc kubenswrapper[4705]: I0216 15:20:09.246597 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.246570406 podStartE2EDuration="37.246570406s" podCreationTimestamp="2026-02-16 15:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:20:09.226919379 +0000 UTC m=+1603.411896455" watchObservedRunningTime="2026-02-16 15:20:09.246570406 +0000 UTC m=+1603.431547482" Feb 16 15:20:11 crc kubenswrapper[4705]: E0216 15:20:11.423187 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:20:12 crc kubenswrapper[4705]: I0216 15:20:12.421064 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:20:12 crc kubenswrapper[4705]: E0216 15:20:12.421658 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:20:13 crc kubenswrapper[4705]: E0216 15:20:13.550307 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:20:13 crc kubenswrapper[4705]: E0216 15:20:13.550390 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:20:13 crc kubenswrapper[4705]: E0216 15:20:13.550585 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:20:13 crc kubenswrapper[4705]: E0216 15:20:13.551774 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:20:20 crc kubenswrapper[4705]: I0216 15:20:20.330492 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" event={"ID":"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0","Type":"ContainerStarted","Data":"1876ff5fd93e1e219015323ca33bede9ed97b798b3572be3fd3f4dde7c3e2f72"} Feb 16 15:20:20 crc kubenswrapper[4705]: I0216 15:20:20.356535 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" podStartSLOduration=2.302403037 podStartE2EDuration="13.356513726s" podCreationTimestamp="2026-02-16 15:20:07 +0000 UTC" firstStartedPulling="2026-02-16 15:20:08.409793752 +0000 UTC m=+1602.594770818" lastFinishedPulling="2026-02-16 15:20:19.463904421 +0000 UTC m=+1613.648881507" observedRunningTime="2026-02-16 15:20:20.350254038 +0000 UTC m=+1614.535231124" watchObservedRunningTime="2026-02-16 15:20:20.356513726 +0000 UTC m=+1614.541490802" Feb 16 15:20:21 crc kubenswrapper[4705]: I0216 15:20:21.781651 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 16 15:20:21 crc kubenswrapper[4705]: I0216 15:20:21.885709 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:20:23 crc kubenswrapper[4705]: I0216 15:20:23.167636 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:20:23 crc kubenswrapper[4705]: I0216 15:20:23.419706 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:20:23 crc kubenswrapper[4705]: E0216 15:20:23.420081 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:20:24 crc kubenswrapper[4705]: E0216 15:20:24.422946 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:20:25 crc kubenswrapper[4705]: E0216 15:20:25.422053 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:20:27 crc kubenswrapper[4705]: I0216 15:20:27.078185 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="rabbitmq" containerID="cri-o://eebb0ead065499915d7a7044c050bea4c8e0517ce9b75b4f679fb68063b8e5ce" gracePeriod=604795 Feb 16 15:20:29 crc kubenswrapper[4705]: I0216 15:20:29.746675 4705 scope.go:117] "RemoveContainer" containerID="eda342c5c8c6a51871935a7c42d9108a69f95180c1db4ddf74979e0a43434713" Feb 16 15:20:29 crc kubenswrapper[4705]: I0216 15:20:29.783564 4705 scope.go:117] "RemoveContainer" containerID="0612e4fd190e16edf94f100c0cb911943f4b56aaf02aaa8d1073d8e8e6f4c802" Feb 16 15:20:29 crc kubenswrapper[4705]: I0216 15:20:29.882131 4705 scope.go:117] "RemoveContainer" containerID="338cf708ba8f10f855855c2179e37cb77b418143d440fdc6a5cda229e650ec37" Feb 16 15:20:29 crc 
kubenswrapper[4705]: I0216 15:20:29.941294 4705 scope.go:117] "RemoveContainer" containerID="b70e5c0615812ff6aed42dcb8e09a0b01754fd31e289a59cfbe7b21ae9cc3afe" Feb 16 15:20:29 crc kubenswrapper[4705]: I0216 15:20:29.978815 4705 scope.go:117] "RemoveContainer" containerID="f6951bab61da5a049a56c33ba93e49df3fdc49b02f25b9de92342c70737b1218" Feb 16 15:20:30 crc kubenswrapper[4705]: I0216 15:20:30.014935 4705 scope.go:117] "RemoveContainer" containerID="e19781e10423d51e9d0ddb50f45ae545361f191e04463e485e5d4a1ca06560e1" Feb 16 15:20:30 crc kubenswrapper[4705]: I0216 15:20:30.103051 4705 scope.go:117] "RemoveContainer" containerID="cc5c6c10d91867ec0e668fe37ec2a652d379064601d63333e598987b86ebe834" Feb 16 15:20:30 crc kubenswrapper[4705]: I0216 15:20:30.145032 4705 scope.go:117] "RemoveContainer" containerID="f5c17e7d39b9ddbcba6b3a6b64fb5b75e17d9532faec51dee99c1ace5575000a" Feb 16 15:20:30 crc kubenswrapper[4705]: I0216 15:20:30.459902 4705 generic.go:334] "Generic (PLEG): container finished" podID="9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" containerID="1876ff5fd93e1e219015323ca33bede9ed97b798b3572be3fd3f4dde7c3e2f72" exitCode=0 Feb 16 15:20:30 crc kubenswrapper[4705]: I0216 15:20:30.459980 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" event={"ID":"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0","Type":"ContainerDied","Data":"1876ff5fd93e1e219015323ca33bede9ed97b798b3572be3fd3f4dde7c3e2f72"} Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.064397 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.203654 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-ssh-key-openstack-edpm-ipam\") pod \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.203730 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-repo-setup-combined-ca-bundle\") pod \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.203800 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-inventory\") pod \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.203952 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbnkx\" (UniqueName: \"kubernetes.io/projected/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-kube-api-access-mbnkx\") pod \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\" (UID: \"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0\") " Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.215651 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" (UID: "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.229176 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-kube-api-access-mbnkx" (OuterVolumeSpecName: "kube-api-access-mbnkx") pod "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" (UID: "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0"). InnerVolumeSpecName "kube-api-access-mbnkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.244240 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-inventory" (OuterVolumeSpecName: "inventory") pod "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" (UID: "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.250955 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" (UID: "9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.308825 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbnkx\" (UniqueName: \"kubernetes.io/projected/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-kube-api-access-mbnkx\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.308913 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.308931 4705 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.308948 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.540610 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" event={"ID":"9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0","Type":"ContainerDied","Data":"adf58fbd5e38b5411e07f7ddeda61f720afb2ed034692fc1b3b09d54b2b865b0"} Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.540658 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf58fbd5e38b5411e07f7ddeda61f720afb2ed034692fc1b3b09d54b2b865b0" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.540725 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.657457 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59"] Feb 16 15:20:32 crc kubenswrapper[4705]: E0216 15:20:32.658104 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.658125 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.658335 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.659252 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.670178 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.670632 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.670816 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.674041 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.691982 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59"] Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.727722 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.727857 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.727977 4705 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgt6n\" (UniqueName: \"kubernetes.io/projected/c73749fc-8501-405f-bd7e-de9fca2d968a-kube-api-access-hgt6n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.830292 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.830899 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgt6n\" (UniqueName: \"kubernetes.io/projected/c73749fc-8501-405f-bd7e-de9fca2d968a-kube-api-access-hgt6n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.831005 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.839627 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: 
\"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.841133 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:32 crc kubenswrapper[4705]: I0216 15:20:32.850261 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgt6n\" (UniqueName: \"kubernetes.io/projected/c73749fc-8501-405f-bd7e-de9fca2d968a-kube-api-access-hgt6n\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7zg59\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.018935 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.572402 4705 generic.go:334] "Generic (PLEG): container finished" podID="139788ad-b160-4139-a6af-094e33c581e5" containerID="eebb0ead065499915d7a7044c050bea4c8e0517ce9b75b4f679fb68063b8e5ce" exitCode=0 Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.572899 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"139788ad-b160-4139-a6af-094e33c581e5","Type":"ContainerDied","Data":"eebb0ead065499915d7a7044c050bea4c8e0517ce9b75b4f679fb68063b8e5ce"} Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.683841 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59"] Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.751309 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.770864 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/139788ad-b160-4139-a6af-094e33c581e5-erlang-cookie-secret\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.771149 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-erlang-cookie\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.771293 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-config-data\") pod 
\"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.771416 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-server-conf\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.772907 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.772957 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-plugins\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.773042 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-tls\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.773646 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.775548 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.789844 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-confd\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.789989 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/139788ad-b160-4139-a6af-094e33c581e5-pod-info\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.790126 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-plugins-conf\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.790339 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfsp9\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-kube-api-access-tfsp9\") pod \"139788ad-b160-4139-a6af-094e33c581e5\" (UID: \"139788ad-b160-4139-a6af-094e33c581e5\") " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 
15:20:33.790383 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.793220 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.794871 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.794910 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.794924 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.848993 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139788ad-b160-4139-a6af-094e33c581e5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: 
"139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.849109 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-kube-api-access-tfsp9" (OuterVolumeSpecName: "kube-api-access-tfsp9") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "kube-api-access-tfsp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.853104 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0" (OuterVolumeSpecName: "persistence") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.854617 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/139788ad-b160-4139-a6af-094e33c581e5-pod-info" (OuterVolumeSpecName: "pod-info") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.876950 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-config-data" (OuterVolumeSpecName: "config-data") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.899047 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfsp9\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-kube-api-access-tfsp9\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.899091 4705 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/139788ad-b160-4139-a6af-094e33c581e5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.899108 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.899152 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") on node \"crc\" " Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.899169 4705 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/139788ad-b160-4139-a6af-094e33c581e5-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.899182 4705 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.933332 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-server-conf" (OuterVolumeSpecName: "server-conf") pod 
"139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.973879 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 15:20:33 crc kubenswrapper[4705]: I0216 15:20:33.974119 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0") on node "crc" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.006122 4705 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/139788ad-b160-4139-a6af-094e33c581e5-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.006179 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.047439 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "139788ad-b160-4139-a6af-094e33c581e5" (UID: "139788ad-b160-4139-a6af-094e33c581e5"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.109241 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/139788ad-b160-4139-a6af-094e33c581e5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.590351 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" event={"ID":"c73749fc-8501-405f-bd7e-de9fca2d968a","Type":"ContainerStarted","Data":"30d060056f13bbfbd9ccded1068f9b818cfdcf84c65f6d49b4c123711de7d04e"} Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.590887 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" event={"ID":"c73749fc-8501-405f-bd7e-de9fca2d968a","Type":"ContainerStarted","Data":"32f21ecbb184ef9bff3f82a607d8e5ad680acdc96d2e91856571eff05a285b14"} Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.593273 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"139788ad-b160-4139-a6af-094e33c581e5","Type":"ContainerDied","Data":"ad93a17a230e0f89ffb728c848e626d65cc868f03d8c72f03802d0c82854159a"} Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.593345 4705 scope.go:117] "RemoveContainer" containerID="eebb0ead065499915d7a7044c050bea4c8e0517ce9b75b4f679fb68063b8e5ce" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.593492 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.635410 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" podStartSLOduration=2.2049889990000002 podStartE2EDuration="2.63538559s" podCreationTimestamp="2026-02-16 15:20:32 +0000 UTC" firstStartedPulling="2026-02-16 15:20:33.695333911 +0000 UTC m=+1627.880310997" lastFinishedPulling="2026-02-16 15:20:34.125730512 +0000 UTC m=+1628.310707588" observedRunningTime="2026-02-16 15:20:34.61491952 +0000 UTC m=+1628.799896606" watchObservedRunningTime="2026-02-16 15:20:34.63538559 +0000 UTC m=+1628.820362666" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.647263 4705 scope.go:117] "RemoveContainer" containerID="c45bc0861e5e942a3fddb03b7864490ab4f0322209d56a4aa3501d6face13652" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.663767 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.687335 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.705506 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:20:34 crc kubenswrapper[4705]: E0216 15:20:34.706270 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="setup-container" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.706289 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="setup-container" Feb 16 15:20:34 crc kubenswrapper[4705]: E0216 15:20:34.706351 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="rabbitmq" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 
15:20:34.706359 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="rabbitmq" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.706659 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="rabbitmq" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.708200 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.718445 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830574 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-config-data\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830627 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830705 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830732 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830774 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830818 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830884 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp84p\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-kube-api-access-tp84p\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830909 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830950 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.830974 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.831021 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934057 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-config-data\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934109 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934176 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: 
\"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934198 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934228 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934280 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934326 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp84p\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-kube-api-access-tp84p\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934349 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 
15:20:34.934399 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934417 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.934459 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.935472 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.936009 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-config-data\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.937133 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-server-conf\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.937797 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.938063 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.945033 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-pod-info\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.945341 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.945468 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" 
Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.945687 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.947929 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.947979 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/75a91b98174d7040097f89a93bfd5946d971fbacf68f20932d87234b8e73eca0/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 16 15:20:34 crc kubenswrapper[4705]: I0216 15:20:34.961309 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp84p\" (UniqueName: \"kubernetes.io/projected/3e86fa10-e583-4f86-97f5-e95ec2c9e9e0-kube-api-access-tp84p\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:35 crc kubenswrapper[4705]: I0216 15:20:35.021408 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fdb50a9-f849-49e0-8ba9-dd211135add0\") pod \"rabbitmq-server-1\" (UID: \"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0\") " pod="openstack/rabbitmq-server-1" Feb 16 15:20:35 crc kubenswrapper[4705]: I0216 15:20:35.031207 4705 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 16 15:20:35 crc kubenswrapper[4705]: E0216 15:20:35.424041 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:20:35 crc kubenswrapper[4705]: I0216 15:20:35.642904 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 16 15:20:36 crc kubenswrapper[4705]: I0216 15:20:36.420123 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:20:36 crc kubenswrapper[4705]: E0216 15:20:36.420813 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:20:36 crc kubenswrapper[4705]: I0216 15:20:36.447656 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="139788ad-b160-4139-a6af-094e33c581e5" path="/var/lib/kubelet/pods/139788ad-b160-4139-a6af-094e33c581e5/volumes" Feb 16 15:20:36 crc kubenswrapper[4705]: I0216 15:20:36.623029 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0","Type":"ContainerStarted","Data":"9252c67adb26afbb27ee35987fff52022c14379f791402a435b29a668b7d4162"} Feb 16 15:20:37 crc kubenswrapper[4705]: E0216 15:20:37.421626 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:20:37 crc kubenswrapper[4705]: I0216 15:20:37.643860 4705 generic.go:334] "Generic (PLEG): container finished" podID="c73749fc-8501-405f-bd7e-de9fca2d968a" containerID="30d060056f13bbfbd9ccded1068f9b818cfdcf84c65f6d49b4c123711de7d04e" exitCode=0 Feb 16 15:20:37 crc kubenswrapper[4705]: I0216 15:20:37.643946 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" event={"ID":"c73749fc-8501-405f-bd7e-de9fca2d968a","Type":"ContainerDied","Data":"30d060056f13bbfbd9ccded1068f9b818cfdcf84c65f6d49b4c123711de7d04e"} Feb 16 15:20:38 crc kubenswrapper[4705]: I0216 15:20:38.661401 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0","Type":"ContainerStarted","Data":"4889bc008f884b027c0a7a92ff8bbfd0547ce687450d7545c32f2bdf009295b9"} Feb 16 15:20:38 crc kubenswrapper[4705]: I0216 15:20:38.703285 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="139788ad-b160-4139-a6af-094e33c581e5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: i/o timeout" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.276270 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.403541 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgt6n\" (UniqueName: \"kubernetes.io/projected/c73749fc-8501-405f-bd7e-de9fca2d968a-kube-api-access-hgt6n\") pod \"c73749fc-8501-405f-bd7e-de9fca2d968a\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.403986 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-inventory\") pod \"c73749fc-8501-405f-bd7e-de9fca2d968a\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.404093 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-ssh-key-openstack-edpm-ipam\") pod \"c73749fc-8501-405f-bd7e-de9fca2d968a\" (UID: \"c73749fc-8501-405f-bd7e-de9fca2d968a\") " Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.417133 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73749fc-8501-405f-bd7e-de9fca2d968a-kube-api-access-hgt6n" (OuterVolumeSpecName: "kube-api-access-hgt6n") pod "c73749fc-8501-405f-bd7e-de9fca2d968a" (UID: "c73749fc-8501-405f-bd7e-de9fca2d968a"). InnerVolumeSpecName "kube-api-access-hgt6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.449172 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c73749fc-8501-405f-bd7e-de9fca2d968a" (UID: "c73749fc-8501-405f-bd7e-de9fca2d968a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.459175 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-inventory" (OuterVolumeSpecName: "inventory") pod "c73749fc-8501-405f-bd7e-de9fca2d968a" (UID: "c73749fc-8501-405f-bd7e-de9fca2d968a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.509410 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.509447 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c73749fc-8501-405f-bd7e-de9fca2d968a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.509458 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgt6n\" (UniqueName: \"kubernetes.io/projected/c73749fc-8501-405f-bd7e-de9fca2d968a-kube-api-access-hgt6n\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.694135 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.695500 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7zg59" event={"ID":"c73749fc-8501-405f-bd7e-de9fca2d968a","Type":"ContainerDied","Data":"32f21ecbb184ef9bff3f82a607d8e5ad680acdc96d2e91856571eff05a285b14"} Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.695553 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f21ecbb184ef9bff3f82a607d8e5ad680acdc96d2e91856571eff05a285b14" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.783202 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t"] Feb 16 15:20:39 crc kubenswrapper[4705]: E0216 15:20:39.783974 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73749fc-8501-405f-bd7e-de9fca2d968a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.783995 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73749fc-8501-405f-bd7e-de9fca2d968a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.784284 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73749fc-8501-405f-bd7e-de9fca2d968a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.785403 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.788011 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.791358 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.791482 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.791383 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.806812 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t"] Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.922252 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.922715 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjfnt\" (UniqueName: \"kubernetes.io/projected/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-kube-api-access-bjfnt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 
15:20:39.922765 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:39 crc kubenswrapper[4705]: I0216 15:20:39.922800 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.026833 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.026914 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjfnt\" (UniqueName: \"kubernetes.io/projected/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-kube-api-access-bjfnt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.026986 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.027038 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.031872 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.032691 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.047056 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.050169 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjfnt\" (UniqueName: \"kubernetes.io/projected/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-kube-api-access-bjfnt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.112852 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:20:40 crc kubenswrapper[4705]: I0216 15:20:40.980640 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t"] Feb 16 15:20:40 crc kubenswrapper[4705]: W0216 15:20:40.983064 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae6ba4a0_6ae7_42c6_9d27_cb62696d2c85.slice/crio-431fd67365b9a3301e29f83e7408e69b8e01a6ccf2f9ca10eeed8863fd854405 WatchSource:0}: Error finding container 431fd67365b9a3301e29f83e7408e69b8e01a6ccf2f9ca10eeed8863fd854405: Status 404 returned error can't find the container with id 431fd67365b9a3301e29f83e7408e69b8e01a6ccf2f9ca10eeed8863fd854405 Feb 16 15:20:41 crc kubenswrapper[4705]: I0216 15:20:41.723291 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" event={"ID":"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85","Type":"ContainerStarted","Data":"431fd67365b9a3301e29f83e7408e69b8e01a6ccf2f9ca10eeed8863fd854405"} Feb 16 15:20:43 crc kubenswrapper[4705]: I0216 15:20:43.755407 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" 
event={"ID":"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85","Type":"ContainerStarted","Data":"e6cc743d4ef1f73713fbb9c6a811713740425faca4b1cb39c8806738ea026449"} Feb 16 15:20:43 crc kubenswrapper[4705]: I0216 15:20:43.785071 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" podStartSLOduration=3.03453318 podStartE2EDuration="4.785039541s" podCreationTimestamp="2026-02-16 15:20:39 +0000 UTC" firstStartedPulling="2026-02-16 15:20:40.988219857 +0000 UTC m=+1635.173196943" lastFinishedPulling="2026-02-16 15:20:42.738726218 +0000 UTC m=+1636.923703304" observedRunningTime="2026-02-16 15:20:43.774338478 +0000 UTC m=+1637.959315554" watchObservedRunningTime="2026-02-16 15:20:43.785039541 +0000 UTC m=+1637.970016647" Feb 16 15:20:47 crc kubenswrapper[4705]: E0216 15:20:47.424923 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:20:48 crc kubenswrapper[4705]: E0216 15:20:48.570040 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:20:48 crc kubenswrapper[4705]: E0216 15:20:48.570464 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:20:48 crc kubenswrapper[4705]: E0216 15:20:48.570622 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5
d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:20:48 crc kubenswrapper[4705]: E0216 15:20:48.572699 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:20:50 crc kubenswrapper[4705]: I0216 15:20:50.420070 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:20:50 crc kubenswrapper[4705]: E0216 15:20:50.421161 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:20:58 crc kubenswrapper[4705]: E0216 15:20:58.539153 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:20:58 crc kubenswrapper[4705]: E0216 15:20:58.539945 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:20:58 crc kubenswrapper[4705]: E0216 15:20:58.540090 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:20:58 crc kubenswrapper[4705]: E0216 15:20:58.541847 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:21:02 crc kubenswrapper[4705]: E0216 15:21:02.424920 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:21:03 crc kubenswrapper[4705]: I0216 15:21:03.420714 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:21:03 crc kubenswrapper[4705]: E0216 15:21:03.421476 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:21:10 crc kubenswrapper[4705]: E0216 15:21:10.424012 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:21:11 crc kubenswrapper[4705]: I0216 15:21:11.145883 4705 generic.go:334] "Generic (PLEG): container finished" podID="3e86fa10-e583-4f86-97f5-e95ec2c9e9e0" containerID="4889bc008f884b027c0a7a92ff8bbfd0547ce687450d7545c32f2bdf009295b9" exitCode=0 Feb 16 15:21:11 crc kubenswrapper[4705]: I0216 15:21:11.145998 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0","Type":"ContainerDied","Data":"4889bc008f884b027c0a7a92ff8bbfd0547ce687450d7545c32f2bdf009295b9"} Feb 16 15:21:12 crc kubenswrapper[4705]: I0216 15:21:12.162265 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"3e86fa10-e583-4f86-97f5-e95ec2c9e9e0","Type":"ContainerStarted","Data":"2f9967b51caa448c77442bfa47901aa5cb2237ddbe6775da90e5595999d18128"} Feb 16 15:21:12 crc kubenswrapper[4705]: I0216 15:21:12.163051 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 16 15:21:14 crc kubenswrapper[4705]: E0216 15:21:14.423446 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:21:14 crc kubenswrapper[4705]: I0216 15:21:14.456417 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=40.456396313 podStartE2EDuration="40.456396313s" podCreationTimestamp="2026-02-16 15:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:21:12.194162935 +0000 UTC m=+1666.379140031" watchObservedRunningTime="2026-02-16 15:21:14.456396313 +0000 UTC m=+1668.641373399" Feb 16 15:21:17 crc kubenswrapper[4705]: I0216 15:21:17.420226 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:21:17 crc kubenswrapper[4705]: E0216 15:21:17.421292 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:21:23 crc kubenswrapper[4705]: E0216 15:21:23.425306 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:21:25 crc kubenswrapper[4705]: I0216 15:21:25.035035 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 16 15:21:25 crc kubenswrapper[4705]: I0216 15:21:25.121517 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:21:28 crc kubenswrapper[4705]: E0216 15:21:28.429730 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:21:30 crc kubenswrapper[4705]: I0216 15:21:30.161141 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="rabbitmq" containerID="cri-o://ddc79c616a980da9bec5ac9f7c1b7626ab1ecb622f323dda933da451c9482f30" gracePeriod=604795 Feb 16 15:21:30 crc kubenswrapper[4705]: I0216 15:21:30.562517 4705 scope.go:117] "RemoveContainer" containerID="7ff5e61a38310582085a72b8f58aa1b56f16c702a01b7dce04612b124d545df9" Feb 16 15:21:30 crc kubenswrapper[4705]: I0216 15:21:30.611254 4705 scope.go:117] 
"RemoveContainer" containerID="72eb1ef184be31aa6e604bc1b1e7ef2a67bc265c5ddd264b807efbf4b1b61b79" Feb 16 15:21:30 crc kubenswrapper[4705]: I0216 15:21:30.659322 4705 scope.go:117] "RemoveContainer" containerID="baa2831e35077fa704a32b810c85079d3310969dea312c19a9de3b1a5f7540ac" Feb 16 15:21:32 crc kubenswrapper[4705]: I0216 15:21:32.421166 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:21:32 crc kubenswrapper[4705]: E0216 15:21:32.422339 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:21:33 crc kubenswrapper[4705]: I0216 15:21:33.336122 4705 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Feb 16 15:21:36 crc kubenswrapper[4705]: I0216 15:21:36.482725 4705 generic.go:334] "Generic (PLEG): container finished" podID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerID="ddc79c616a980da9bec5ac9f7c1b7626ab1ecb622f323dda933da451c9482f30" exitCode=0 Feb 16 15:21:36 crc kubenswrapper[4705]: I0216 15:21:36.483313 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3ba19f15-a399-4d4b-bf32-a2a870a660e5","Type":"ContainerDied","Data":"ddc79c616a980da9bec5ac9f7c1b7626ab1ecb622f323dda933da451c9482f30"} Feb 16 15:21:36 crc kubenswrapper[4705]: I0216 15:21:36.919421 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.096478 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-config-data\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.096991 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-server-conf\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.097022 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-erlang-cookie\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.097179 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-confd\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.097391 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ba19f15-a399-4d4b-bf32-a2a870a660e5-pod-info\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.097417 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd25j\" 
(UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-kube-api-access-pd25j\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.098027 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.098070 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-plugins-conf\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.098184 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ba19f15-a399-4d4b-bf32-a2a870a660e5-erlang-cookie-secret\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.098229 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-tls\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.098278 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-plugins\") pod \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\" (UID: \"3ba19f15-a399-4d4b-bf32-a2a870a660e5\") " Feb 16 15:21:37 
crc kubenswrapper[4705]: I0216 15:21:37.105285 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.109297 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.110886 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.117102 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ba19f15-a399-4d4b-bf32-a2a870a660e5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.123929 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3ba19f15-a399-4d4b-bf32-a2a870a660e5-pod-info" (OuterVolumeSpecName: "pod-info") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.152135 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c" (OuterVolumeSpecName: "persistence") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.162884 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.164454 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-kube-api-access-pd25j" (OuterVolumeSpecName: "kube-api-access-pd25j") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "kube-api-access-pd25j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.193928 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-config-data" (OuterVolumeSpecName: "config-data") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205406 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205444 4705 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3ba19f15-a399-4d4b-bf32-a2a870a660e5-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205456 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd25j\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-kube-api-access-pd25j\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205490 4705 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") on node \"crc\" " Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205503 4705 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205514 4705 reconciler_common.go:293] "Volume 
detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3ba19f15-a399-4d4b-bf32-a2a870a660e5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205522 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205532 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.205542 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.218976 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-server-conf" (OuterVolumeSpecName: "server-conf") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.243116 4705 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.243295 4705 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c") on node "crc" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.295176 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3ba19f15-a399-4d4b-bf32-a2a870a660e5" (UID: "3ba19f15-a399-4d4b-bf32-a2a870a660e5"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.308569 4705 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3ba19f15-a399-4d4b-bf32-a2a870a660e5-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.308616 4705 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3ba19f15-a399-4d4b-bf32-a2a870a660e5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.308630 4705 reconciler_common.go:293] "Volume detached for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.498901 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3ba19f15-a399-4d4b-bf32-a2a870a660e5","Type":"ContainerDied","Data":"c10aeda896c97ab2b56b22cb8e034aaa58126bfac49a954b06a32ef9f4316ccc"} Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.498967 4705 scope.go:117] "RemoveContainer" 
containerID="ddc79c616a980da9bec5ac9f7c1b7626ab1ecb622f323dda933da451c9482f30" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.499007 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.541193 4705 scope.go:117] "RemoveContainer" containerID="86e9ac4153a2ccf0f2f0a689cbb68d98c66cd9f62606340a11ddf8bd0f8e2f02" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.550419 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.566098 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.642563 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:21:37 crc kubenswrapper[4705]: E0216 15:21:37.643536 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="rabbitmq" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.643557 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="rabbitmq" Feb 16 15:21:37 crc kubenswrapper[4705]: E0216 15:21:37.643576 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="setup-container" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.643582 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="setup-container" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.646298 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" containerName="rabbitmq" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.654588 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.660611 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.728543 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.728595 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-server-conf\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.728650 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af0e4de4-5af4-4d5c-b2c4-963771612f94-pod-info\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.728749 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.728848 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.728907 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.729146 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.729313 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-config-data\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.729431 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f49nk\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-kube-api-access-f49nk\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.729692 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.729809 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af0e4de4-5af4-4d5c-b2c4-963771612f94-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832495 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af0e4de4-5af4-4d5c-b2c4-963771612f94-pod-info\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832553 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832585 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832611 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832646 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832686 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-config-data\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832719 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f49nk\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-kube-api-access-f49nk\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832779 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832819 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af0e4de4-5af4-4d5c-b2c4-963771612f94-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc 
kubenswrapper[4705]: I0216 15:21:37.832912 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.832935 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-server-conf\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.833938 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.834240 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-server-conf\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.835024 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af0e4de4-5af4-4d5c-b2c4-963771612f94-config-data\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.835855 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.836787 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.840202 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.841163 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.848986 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af0e4de4-5af4-4d5c-b2c4-963771612f94-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.866050 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f49nk\" (UniqueName: \"kubernetes.io/projected/af0e4de4-5af4-4d5c-b2c4-963771612f94-kube-api-access-f49nk\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " 
pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.866479 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af0e4de4-5af4-4d5c-b2c4-963771612f94-pod-info\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.866898 4705 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.866927 4705 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6913a5af6e0b901f5e41cc9da5820d3446361504ddf8a58e3143477836427e51/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.980169 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8133d6e6-bdc9-4aa6-8bae-fa1f86885a3c\") pod \"rabbitmq-server-0\" (UID: \"af0e4de4-5af4-4d5c-b2c4-963771612f94\") " pod="openstack/rabbitmq-server-0" Feb 16 15:21:37 crc kubenswrapper[4705]: I0216 15:21:37.994662 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:21:38 crc kubenswrapper[4705]: E0216 15:21:38.422925 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:21:38 crc kubenswrapper[4705]: I0216 15:21:38.434430 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ba19f15-a399-4d4b-bf32-a2a870a660e5" path="/var/lib/kubelet/pods/3ba19f15-a399-4d4b-bf32-a2a870a660e5/volumes" Feb 16 15:21:38 crc kubenswrapper[4705]: I0216 15:21:38.515069 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:21:39 crc kubenswrapper[4705]: I0216 15:21:39.527231 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"af0e4de4-5af4-4d5c-b2c4-963771612f94","Type":"ContainerStarted","Data":"73c4bd308c39c7d0431f11bc6afcb72243dfef0a42d552bb8c6fdc299c566e41"} Feb 16 15:21:41 crc kubenswrapper[4705]: E0216 15:21:41.423187 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:21:41 crc kubenswrapper[4705]: I0216 15:21:41.552341 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"af0e4de4-5af4-4d5c-b2c4-963771612f94","Type":"ContainerStarted","Data":"dd038952ef5a63ef81d9dbbf032a40826c341d8aba2d1047ba3856923f4222fc"} Feb 16 15:21:47 crc kubenswrapper[4705]: I0216 15:21:47.420861 4705 scope.go:117] "RemoveContainer" 
containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:21:47 crc kubenswrapper[4705]: E0216 15:21:47.422402 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:21:52 crc kubenswrapper[4705]: E0216 15:21:52.435150 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:21:53 crc kubenswrapper[4705]: E0216 15:21:53.421317 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:21:58 crc kubenswrapper[4705]: I0216 15:21:58.421160 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:21:58 crc kubenswrapper[4705]: E0216 15:21:58.422318 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:22:05 crc kubenswrapper[4705]: E0216 15:22:05.423014 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:22:06 crc kubenswrapper[4705]: E0216 15:22:06.435576 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:22:11 crc kubenswrapper[4705]: I0216 15:22:11.420709 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:22:11 crc kubenswrapper[4705]: E0216 15:22:11.422761 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:22:14 crc kubenswrapper[4705]: I0216 15:22:14.062708 4705 generic.go:334] "Generic (PLEG): container finished" podID="af0e4de4-5af4-4d5c-b2c4-963771612f94" containerID="dd038952ef5a63ef81d9dbbf032a40826c341d8aba2d1047ba3856923f4222fc" exitCode=0 Feb 16 15:22:14 crc kubenswrapper[4705]: I0216 15:22:14.062791 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"af0e4de4-5af4-4d5c-b2c4-963771612f94","Type":"ContainerDied","Data":"dd038952ef5a63ef81d9dbbf032a40826c341d8aba2d1047ba3856923f4222fc"} Feb 16 15:22:15 crc kubenswrapper[4705]: I0216 15:22:15.079764 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"af0e4de4-5af4-4d5c-b2c4-963771612f94","Type":"ContainerStarted","Data":"641c1288c2d276fed0c1ca32e80eec0e24c5856c3ff63e7450bb313b86eeca4b"} Feb 16 15:22:15 crc kubenswrapper[4705]: I0216 15:22:15.080907 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 15:22:15 crc kubenswrapper[4705]: I0216 15:22:15.125519 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.125483677 podStartE2EDuration="38.125483677s" podCreationTimestamp="2026-02-16 15:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:22:15.116095211 +0000 UTC m=+1729.301072307" watchObservedRunningTime="2026-02-16 15:22:15.125483677 +0000 UTC m=+1729.310460763" Feb 16 15:22:17 crc kubenswrapper[4705]: E0216 15:22:17.423904 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:22:19 crc kubenswrapper[4705]: E0216 15:22:19.544547 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted 
or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:22:19 crc kubenswrapper[4705]: E0216 15:22:19.544650 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:22:19 crc kubenswrapper[4705]: E0216 15:22:19.544825 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube
-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:22:19 crc kubenswrapper[4705]: E0216 15:22:19.546008 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:22:24 crc kubenswrapper[4705]: I0216 15:22:24.420853 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:22:24 crc kubenswrapper[4705]: E0216 15:22:24.422655 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:22:28 crc kubenswrapper[4705]: I0216 15:22:27.999628 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 15:22:28 crc kubenswrapper[4705]: E0216 15:22:28.539269 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:22:28 crc kubenswrapper[4705]: E0216 15:22:28.539358 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:22:28 crc kubenswrapper[4705]: E0216 15:22:28.539535 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:22:28 crc kubenswrapper[4705]: E0216 15:22:28.540702 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:22:30 crc kubenswrapper[4705]: E0216 15:22:30.425390 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:22:39 crc kubenswrapper[4705]: I0216 15:22:39.420796 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:22:39 crc kubenswrapper[4705]: E0216 15:22:39.422576 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:22:42 crc kubenswrapper[4705]: E0216 15:22:42.425011 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:22:45 crc kubenswrapper[4705]: E0216 15:22:45.423032 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:22:52 
crc kubenswrapper[4705]: I0216 15:22:52.420614 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:22:52 crc kubenswrapper[4705]: E0216 15:22:52.422130 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:22:53 crc kubenswrapper[4705]: I0216 15:22:53.062093 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-7hxxb"] Feb 16 15:22:53 crc kubenswrapper[4705]: I0216 15:22:53.076895 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-7hxxb"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.065091 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zf4nh"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.080464 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-340c-account-create-update-htclx"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.098979 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-n5lkc"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.113563 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0063-account-create-update-4tnvs"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.126197 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zf4nh"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.140425 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/mysqld-exporter-openstack-db-create-n5lkc"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.153429 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0063-account-create-update-4tnvs"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.180445 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-340c-account-create-update-htclx"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.204128 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-78e4-account-create-update-475d7"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.223970 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-78e4-account-create-update-475d7"] Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.439587 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f443bcd-c93f-4b89-a048-cc92f28f5854" path="/var/lib/kubelet/pods/3f443bcd-c93f-4b89-a048-cc92f28f5854/volumes" Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.442701 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca" path="/var/lib/kubelet/pods/69dc1cb8-baeb-4aa6-9f3e-f0cc8e8a0dca/volumes" Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.444762 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a486f037-5709-4199-9f76-0cb0c995af25" path="/var/lib/kubelet/pods/a486f037-5709-4199-9f76-0cb0c995af25/volumes" Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.446198 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2232806-cac7-4787-839b-9bcecac93820" path="/var/lib/kubelet/pods/b2232806-cac7-4787-839b-9bcecac93820/volumes" Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.449128 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cace81ee-1e82-4eb9-b5fa-7837c7dc69bc" 
path="/var/lib/kubelet/pods/cace81ee-1e82-4eb9-b5fa-7837c7dc69bc/volumes" Feb 16 15:22:54 crc kubenswrapper[4705]: I0216 15:22:54.450732 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f37b9312-710d-49b4-8cc7-3956df176627" path="/var/lib/kubelet/pods/f37b9312-710d-49b4-8cc7-3956df176627/volumes" Feb 16 15:22:55 crc kubenswrapper[4705]: E0216 15:22:55.424003 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:22:56 crc kubenswrapper[4705]: E0216 15:22:56.435403 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.059272 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gg5c2"] Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.077876 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a6ad-account-create-update-f24b2"] Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.095775 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"] Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.106688 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a6ad-account-create-update-f24b2"] Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.117669 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gg5c2"] Feb 16 15:23:04 crc 
kubenswrapper[4705]: I0216 15:23:04.138592 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-2xsdv"] Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.420382 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:23:04 crc kubenswrapper[4705]: E0216 15:23:04.420774 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.455973 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7" path="/var/lib/kubelet/pods/19fdf9d1-1dc8-41b3-825e-1ba5f9e9b4f7/volumes" Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.466396 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a2df1c-b87d-4765-b900-e6b165802be2" path="/var/lib/kubelet/pods/45a2df1c-b87d-4765-b900-e6b165802be2/volumes" Feb 16 15:23:04 crc kubenswrapper[4705]: I0216 15:23:04.468635 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5de6a8-c858-4f91-8833-e012562ee1a3" path="/var/lib/kubelet/pods/5c5de6a8-c858-4f91-8833-e012562ee1a3/volumes" Feb 16 15:23:05 crc kubenswrapper[4705]: I0216 15:23:05.044845 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-baa1-account-create-update-4xrwg"] Feb 16 15:23:05 crc kubenswrapper[4705]: I0216 15:23:05.064865 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-baa1-account-create-update-4xrwg"] Feb 16 15:23:06 crc kubenswrapper[4705]: I0216 
15:23:06.436882 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c074c5c-fae9-49f3-8139-adb92b649951" path="/var/lib/kubelet/pods/3c074c5c-fae9-49f3-8139-adb92b649951/volumes" Feb 16 15:23:07 crc kubenswrapper[4705]: E0216 15:23:07.424333 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:23:10 crc kubenswrapper[4705]: E0216 15:23:10.423579 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:23:15 crc kubenswrapper[4705]: I0216 15:23:15.042010 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lz7zl"] Feb 16 15:23:15 crc kubenswrapper[4705]: I0216 15:23:15.056031 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lz7zl"] Feb 16 15:23:16 crc kubenswrapper[4705]: I0216 15:23:16.435953 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b68c2080-dd84-406b-ba19-b4cdd136c90e" path="/var/lib/kubelet/pods/b68c2080-dd84-406b-ba19-b4cdd136c90e/volumes" Feb 16 15:23:18 crc kubenswrapper[4705]: I0216 15:23:18.420423 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:23:18 crc kubenswrapper[4705]: E0216 15:23:18.421837 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:23:20 crc kubenswrapper[4705]: E0216 15:23:20.422444 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:23:21 crc kubenswrapper[4705]: E0216 15:23:21.423027 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:23:30 crc kubenswrapper[4705]: I0216 15:23:30.420339 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:23:30 crc kubenswrapper[4705]: E0216 15:23:30.422290 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:23:30 crc kubenswrapper[4705]: I0216 15:23:30.894776 4705 scope.go:117] "RemoveContainer" containerID="8cbd1af309adfc1dafcf0ea3d77759d2f86265b9808b0b7435417bb754ee409d" Feb 16 15:23:30 crc kubenswrapper[4705]: I0216 15:23:30.947452 4705 
scope.go:117] "RemoveContainer" containerID="48e73b7a2e49fe1ae452d57c429665b68c5000f5389968e1e6b8065a7ce17b47" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.029973 4705 scope.go:117] "RemoveContainer" containerID="e2ac5205d4a22308f913bec93b73c5aa9942844a6633ab0df0a4c46c0609f37a" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.065735 4705 scope.go:117] "RemoveContainer" containerID="e75206ab14fb3712b094ac170d341a1c3364f06bb8b3dfb2b35e1aa8ca3e80f3" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.098167 4705 scope.go:117] "RemoveContainer" containerID="5596960c9342b06a59fbf2992d6d97a46e0198640a405e791967558c0f6addd2" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.168853 4705 scope.go:117] "RemoveContainer" containerID="2f79d797c3129ced8ee4fbe01de9894c6da786bc25e0e54f5445a9d4c4891698" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.213775 4705 scope.go:117] "RemoveContainer" containerID="5297f3386efbde9d5a58546d4fc2397672bac40dc5cdf3c17082d57b2647467b" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.280468 4705 scope.go:117] "RemoveContainer" containerID="55a8a589929400f0bdc43a4b2e65afccb3545d7c47842f8b1d91a93888750508" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.446555 4705 scope.go:117] "RemoveContainer" containerID="0017c5743d3acab30b80453ad1028a61abdf169aafcd88d8f11df99404053765" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.494597 4705 generic.go:334] "Generic (PLEG): container finished" podID="ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" containerID="e6cc743d4ef1f73713fbb9c6a811713740425faca4b1cb39c8806738ea026449" exitCode=0 Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.494685 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" event={"ID":"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85","Type":"ContainerDied","Data":"e6cc743d4ef1f73713fbb9c6a811713740425faca4b1cb39c8806738ea026449"} Feb 16 15:23:31 crc kubenswrapper[4705]: 
I0216 15:23:31.545917 4705 scope.go:117] "RemoveContainer" containerID="d200b7c2e16f651dc486f4322085e2d7e7499ef7b85b5e81ebde83ca03928405" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.570891 4705 scope.go:117] "RemoveContainer" containerID="f4d4c2e298c4ba6337b8d63f488fb5af7c133674755bc78855aa9149d62ea38c" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.592283 4705 scope.go:117] "RemoveContainer" containerID="931b20b998ef273223e9f5d6e3f1f3e4584cf0ee619597e2b65633773ea18c75" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.621174 4705 scope.go:117] "RemoveContainer" containerID="e22a4e97a46141c555ff698e641012530b3f1b9226d8679c4a611d3291ce6a4f" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.654912 4705 scope.go:117] "RemoveContainer" containerID="a6d8674e75cd34a23ae23cec074aadbd60e573be5fb8f1c35656725571554e5a" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.676957 4705 scope.go:117] "RemoveContainer" containerID="dd029ef787696a45ee8492edb3333989fffcd24f678a6be5d379b152c19ca553" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.708618 4705 scope.go:117] "RemoveContainer" containerID="18ae1c633d349b8c0b020bf752fc9e39aa39bfd26d6690fc4fca07118b69dd82" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.741514 4705 scope.go:117] "RemoveContainer" containerID="637fdfe934e0a8bf8ac98354b828f25afaaf9adfd49811868d5e08eb7725c1e1" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.764633 4705 scope.go:117] "RemoveContainer" containerID="56597cab99100354dba4a82ea8867c6ff59a4b68e68ff8f6fa9c785b02526e30" Feb 16 15:23:31 crc kubenswrapper[4705]: I0216 15:23:31.785563 4705 scope.go:117] "RemoveContainer" containerID="65b95c950083c9aeb3e3619fc2bb885d98f3037af8bdbac9d4afb42843773d92" Feb 16 15:23:32 crc kubenswrapper[4705]: E0216 15:23:32.423570 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.030262 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.119651 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-bootstrap-combined-ca-bundle\") pod \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.119839 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjfnt\" (UniqueName: \"kubernetes.io/projected/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-kube-api-access-bjfnt\") pod \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.120230 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-ssh-key-openstack-edpm-ipam\") pod \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.120341 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-inventory\") pod \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\" (UID: \"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85\") " Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.129023 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" (UID: "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.129784 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-kube-api-access-bjfnt" (OuterVolumeSpecName: "kube-api-access-bjfnt") pod "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" (UID: "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85"). InnerVolumeSpecName "kube-api-access-bjfnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.162870 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-inventory" (OuterVolumeSpecName: "inventory") pod "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" (UID: "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.173478 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" (UID: "ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.228120 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.228204 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.228237 4705 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.228263 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjfnt\" (UniqueName: \"kubernetes.io/projected/ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85-kube-api-access-bjfnt\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.553992 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" event={"ID":"ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85","Type":"ContainerDied","Data":"431fd67365b9a3301e29f83e7408e69b8e01a6ccf2f9ca10eeed8863fd854405"} Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.554910 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="431fd67365b9a3301e29f83e7408e69b8e01a6ccf2f9ca10eeed8863fd854405" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.554132 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.681292 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g"] Feb 16 15:23:33 crc kubenswrapper[4705]: E0216 15:23:33.682043 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.682067 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.682316 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.683447 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.690503 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.693870 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.694907 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.695311 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.697562 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g"] Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.745567 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvj82\" (UniqueName: \"kubernetes.io/projected/447b9ab7-d583-4e71-8eca-fb352e541b13-kube-api-access-vvj82\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.746127 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc 
kubenswrapper[4705]: I0216 15:23:33.746303 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.848023 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.848128 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.848784 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvj82\" (UniqueName: \"kubernetes.io/projected/447b9ab7-d583-4e71-8eca-fb352e541b13-kube-api-access-vvj82\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.854474 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.854749 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:33 crc kubenswrapper[4705]: I0216 15:23:33.879713 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvj82\" (UniqueName: \"kubernetes.io/projected/447b9ab7-d583-4e71-8eca-fb352e541b13-kube-api-access-vvj82\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-drn5g\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:34 crc kubenswrapper[4705]: I0216 15:23:34.013489 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:23:34 crc kubenswrapper[4705]: E0216 15:23:34.425434 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:23:34 crc kubenswrapper[4705]: I0216 15:23:34.671759 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g"] Feb 16 15:23:35 crc kubenswrapper[4705]: I0216 15:23:35.589164 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" event={"ID":"447b9ab7-d583-4e71-8eca-fb352e541b13","Type":"ContainerStarted","Data":"28b8a03511de9f268771916995dae0e764844fbb28d7392f4eab5fc6742c96ba"} Feb 16 15:23:35 crc kubenswrapper[4705]: I0216 15:23:35.589737 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" event={"ID":"447b9ab7-d583-4e71-8eca-fb352e541b13","Type":"ContainerStarted","Data":"1755306531a2954e5ed18a62c5063702c29a4b80eca86c2194ea2e1192d5af0b"} Feb 16 15:23:35 crc kubenswrapper[4705]: I0216 15:23:35.626842 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" podStartSLOduration=2.155345768 podStartE2EDuration="2.626805763s" podCreationTimestamp="2026-02-16 15:23:33 +0000 UTC" firstStartedPulling="2026-02-16 15:23:34.677589904 +0000 UTC m=+1808.862566990" lastFinishedPulling="2026-02-16 15:23:35.149049899 +0000 UTC m=+1809.334026985" observedRunningTime="2026-02-16 15:23:35.610615413 +0000 UTC m=+1809.795592489" watchObservedRunningTime="2026-02-16 15:23:35.626805763 
+0000 UTC m=+1809.811782869" Feb 16 15:23:37 crc kubenswrapper[4705]: I0216 15:23:37.067130 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-2kkpm"] Feb 16 15:23:37 crc kubenswrapper[4705]: I0216 15:23:37.082238 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-2kkpm"] Feb 16 15:23:38 crc kubenswrapper[4705]: I0216 15:23:38.447851 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eba064a-3f7c-4395-beca-1b77b85e1a29" path="/var/lib/kubelet/pods/1eba064a-3f7c-4395-beca-1b77b85e1a29/volumes" Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.045187 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-tr9gx"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.063499 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-tr9gx"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.087099 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-mdv7p"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.099144 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lqlft"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.112101 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3bfb-account-create-update-r5cz9"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.123498 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lqlft"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.140401 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-mdv7p"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.156488 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3bfb-account-create-update-r5cz9"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.173986 4705 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-db-create-fpgrj"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.192105 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-fpgrj"] Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.443791 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00962490-7e63-4ba2-95e5-d95167d392bd" path="/var/lib/kubelet/pods/00962490-7e63-4ba2-95e5-d95167d392bd/volumes" Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.446947 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0216c47c-a1cb-48d7-a1cd-96bc1e7726b5" path="/var/lib/kubelet/pods/0216c47c-a1cb-48d7-a1cd-96bc1e7726b5/volumes" Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.449974 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f" path="/var/lib/kubelet/pods/6fd7067a-e13b-4f9c-8ad6-ea5064a46b3f/volumes" Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.450777 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5e7e5c-9868-457d-872b-ec1d3f34449a" path="/var/lib/kubelet/pods/ae5e7e5c-9868-457d-872b-ec1d3f34449a/volumes" Feb 16 15:23:40 crc kubenswrapper[4705]: I0216 15:23:40.452210 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5b60553-5a29-4222-ad99-2f33cedd3879" path="/var/lib/kubelet/pods/f5b60553-5a29-4222-ad99-2f33cedd3879/volumes" Feb 16 15:23:41 crc kubenswrapper[4705]: I0216 15:23:41.055394 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ea32-account-create-update-7qwh2"] Feb 16 15:23:41 crc kubenswrapper[4705]: I0216 15:23:41.076592 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-fb6f-account-create-update-sg7lm"] Feb 16 15:23:41 crc kubenswrapper[4705]: I0216 15:23:41.091795 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-56f8-account-create-update-kbzxq"] Feb 16 
15:23:41 crc kubenswrapper[4705]: I0216 15:23:41.102969 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fb6f-account-create-update-sg7lm"] Feb 16 15:23:41 crc kubenswrapper[4705]: I0216 15:23:41.113526 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-56f8-account-create-update-kbzxq"] Feb 16 15:23:41 crc kubenswrapper[4705]: I0216 15:23:41.123437 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ea32-account-create-update-7qwh2"] Feb 16 15:23:42 crc kubenswrapper[4705]: I0216 15:23:42.441931 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="104ec45d-e95d-40c0-80a8-d59de9e2d45a" path="/var/lib/kubelet/pods/104ec45d-e95d-40c0-80a8-d59de9e2d45a/volumes" Feb 16 15:23:42 crc kubenswrapper[4705]: I0216 15:23:42.445407 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="601c1c55-db3a-443a-bd6b-7d76e884697c" path="/var/lib/kubelet/pods/601c1c55-db3a-443a-bd6b-7d76e884697c/volumes" Feb 16 15:23:42 crc kubenswrapper[4705]: I0216 15:23:42.449748 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d" path="/var/lib/kubelet/pods/cc2bbe98-4ace-4f3a-81c8-5fcdd17fca1d/volumes" Feb 16 15:23:44 crc kubenswrapper[4705]: I0216 15:23:44.419903 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:23:44 crc kubenswrapper[4705]: E0216 15:23:44.420535 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:23:45 crc kubenswrapper[4705]: E0216 
15:23:45.424967 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:23:46 crc kubenswrapper[4705]: E0216 15:23:46.429778 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:23:50 crc kubenswrapper[4705]: I0216 15:23:50.056460 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-gmlkp"] Feb 16 15:23:50 crc kubenswrapper[4705]: I0216 15:23:50.070145 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-gmlkp"] Feb 16 15:23:50 crc kubenswrapper[4705]: I0216 15:23:50.441997 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d65b4384-a678-4002-9583-7f89082af14a" path="/var/lib/kubelet/pods/d65b4384-a678-4002-9583-7f89082af14a/volumes" Feb 16 15:23:58 crc kubenswrapper[4705]: E0216 15:23:58.425414 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:23:59 crc kubenswrapper[4705]: I0216 15:23:59.421708 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:23:59 crc kubenswrapper[4705]: E0216 15:23:59.422860 4705 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:23:59 crc kubenswrapper[4705]: E0216 15:23:59.424504 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:24:10 crc kubenswrapper[4705]: I0216 15:24:10.422853 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:24:10 crc kubenswrapper[4705]: E0216 15:24:10.423762 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:24:10 crc kubenswrapper[4705]: E0216 15:24:10.426482 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:24:12 crc kubenswrapper[4705]: E0216 15:24:12.422688 4705 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:24:21 crc kubenswrapper[4705]: I0216 15:24:21.071935 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-76rfw"] Feb 16 15:24:21 crc kubenswrapper[4705]: I0216 15:24:21.089145 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-76rfw"] Feb 16 15:24:22 crc kubenswrapper[4705]: I0216 15:24:22.420088 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:24:22 crc kubenswrapper[4705]: E0216 15:24:22.422030 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:24:22 crc kubenswrapper[4705]: I0216 15:24:22.437047 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baaef700-c962-494f-bee0-67990bf8bd84" path="/var/lib/kubelet/pods/baaef700-c962-494f-bee0-67990bf8bd84/volumes" Feb 16 15:24:24 crc kubenswrapper[4705]: E0216 15:24:24.426809 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:24:26 crc kubenswrapper[4705]: E0216 15:24:26.436633 4705 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:24:28 crc kubenswrapper[4705]: I0216 15:24:28.047732 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-f8fxj"] Feb 16 15:24:28 crc kubenswrapper[4705]: I0216 15:24:28.064678 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-f8fxj"] Feb 16 15:24:28 crc kubenswrapper[4705]: I0216 15:24:28.434707 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e652b8a2-fe79-4cdc-b376-c4bc0b85197f" path="/var/lib/kubelet/pods/e652b8a2-fe79-4cdc-b376-c4bc0b85197f/volumes" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.231496 4705 scope.go:117] "RemoveContainer" containerID="ca5ac92a7dc65970aa1597da51d8d235081d2d56a401566acfbc85af5a226fbd" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.271106 4705 scope.go:117] "RemoveContainer" containerID="bdfd63c3ecc1595f3e167fa9202bd03a5c184ef38a3f05f7c5708bbb69702bbe" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.330885 4705 scope.go:117] "RemoveContainer" containerID="15fa487fc78680eebbada617a958beee0dc93fabf1acb0258ad86c6a6637b4a3" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.420265 4705 scope.go:117] "RemoveContainer" containerID="be8b3e0326ea71bbc9f9e87ea816230ad05f7c364ba58e44e8812ca01437d1c1" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.479153 4705 scope.go:117] "RemoveContainer" containerID="2f3be024158b93066d5262e9224908fddecc1a451092d024f7b8f2601466a9b4" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.564283 4705 scope.go:117] "RemoveContainer" containerID="264622adf5af6886a931115cc69de7300b2b26acd7842f92edb4bffbce142d23" Feb 16 15:24:32 
crc kubenswrapper[4705]: I0216 15:24:32.605280 4705 scope.go:117] "RemoveContainer" containerID="018bf846d7fe64a859e3c5304849a02f3a4179f776cea2e8ccc7acda8fa71421" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.628781 4705 scope.go:117] "RemoveContainer" containerID="9d7693ed517cfe584b58f1eb27ff9e018459aad540cb357f988a64c00e64f25e" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.663258 4705 scope.go:117] "RemoveContainer" containerID="2d2e1b5af863f030f5a82ceae3d64982596f76c2c83b8724fb79e532c3c6c337" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.705147 4705 scope.go:117] "RemoveContainer" containerID="01529216e6cfee37b45daa7e445d747074cda05873b794d38ec8cf37020c339e" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.752107 4705 scope.go:117] "RemoveContainer" containerID="3441f97b82c61443005d5c636ffa1b9046d09392c2db4e6c04fcbda2de0e8e36" Feb 16 15:24:32 crc kubenswrapper[4705]: I0216 15:24:32.785273 4705 scope.go:117] "RemoveContainer" containerID="e15307e3817ddf50b95ef7cb58ca5a91c87caee40526fb238aca09e99fde3e55" Feb 16 15:24:33 crc kubenswrapper[4705]: I0216 15:24:33.421257 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:24:34 crc kubenswrapper[4705]: I0216 15:24:34.450244 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"7f4db11e79090b84e2ad677e629027370d9c3ded7d98a18a3a8340dd55dee54a"} Feb 16 15:24:35 crc kubenswrapper[4705]: I0216 15:24:35.047412 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-m8mrp"] Feb 16 15:24:35 crc kubenswrapper[4705]: I0216 15:24:35.065617 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-m8mrp"] Feb 16 15:24:36 crc kubenswrapper[4705]: I0216 15:24:36.450201 4705 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="eeee3c96-5da7-42eb-9fd9-07a5f09182d5" path="/var/lib/kubelet/pods/eeee3c96-5da7-42eb-9fd9-07a5f09182d5/volumes" Feb 16 15:24:37 crc kubenswrapper[4705]: E0216 15:24:37.425492 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:24:39 crc kubenswrapper[4705]: E0216 15:24:39.424979 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:24:47 crc kubenswrapper[4705]: I0216 15:24:47.054404 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4vj9p"] Feb 16 15:24:47 crc kubenswrapper[4705]: I0216 15:24:47.074611 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4vj9p"] Feb 16 15:24:48 crc kubenswrapper[4705]: I0216 15:24:48.044391 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-scncd"] Feb 16 15:24:48 crc kubenswrapper[4705]: I0216 15:24:48.056509 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-scncd"] Feb 16 15:24:48 crc kubenswrapper[4705]: I0216 15:24:48.441963 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="302aee2f-61be-439f-a04e-356243bb65b6" path="/var/lib/kubelet/pods/302aee2f-61be-439f-a04e-356243bb65b6/volumes" Feb 16 15:24:48 crc kubenswrapper[4705]: I0216 15:24:48.443015 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ddb24908-6026-4fe7-81b6-345402c9398e" path="/var/lib/kubelet/pods/ddb24908-6026-4fe7-81b6-345402c9398e/volumes" Feb 16 15:24:49 crc kubenswrapper[4705]: E0216 15:24:49.422694 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:24:54 crc kubenswrapper[4705]: E0216 15:24:54.423902 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:25:03 crc kubenswrapper[4705]: I0216 15:25:03.422895 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:25:03 crc kubenswrapper[4705]: E0216 15:25:03.519901 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:25:03 crc kubenswrapper[4705]: E0216 15:25:03.519966 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:25:03 crc kubenswrapper[4705]: E0216 15:25:03.520108 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5
d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:25:03 crc kubenswrapper[4705]: E0216 15:25:03.521952 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:25:06 crc kubenswrapper[4705]: E0216 15:25:06.435667 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:25:17 crc kubenswrapper[4705]: E0216 15:25:17.556305 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:25:17 crc kubenswrapper[4705]: E0216 15:25:17.556911 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:25:17 crc kubenswrapper[4705]: E0216 15:25:17.557059 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:25:17 crc kubenswrapper[4705]: E0216 15:25:17.558445 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:25:19 crc kubenswrapper[4705]: E0216 15:25:19.422528 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:25:31 crc kubenswrapper[4705]: E0216 15:25:31.423666 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:25:32 crc kubenswrapper[4705]: E0216 15:25:32.421679 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:25:33 crc kubenswrapper[4705]: I0216 15:25:33.058510 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-de3f-account-create-update-d2gp8"] Feb 16 15:25:33 crc kubenswrapper[4705]: I0216 15:25:33.072360 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-de3f-account-create-update-d2gp8"] Feb 16 15:25:33 crc kubenswrapper[4705]: I0216 15:25:33.082485 4705 scope.go:117] "RemoveContainer" containerID="a7a5ccb1213e05403b2c609c1d0142378875d98d299f4c29f81e4b95d8d137f8" Feb 16 15:25:33 crc kubenswrapper[4705]: I0216 15:25:33.125073 4705 scope.go:117] "RemoveContainer" 
containerID="99a77b47a3f02f20d1a89b92aa183dce6d0d9402668b42b604a80e789789f55a" Feb 16 15:25:33 crc kubenswrapper[4705]: I0216 15:25:33.183967 4705 scope.go:117] "RemoveContainer" containerID="c4e7cf35ca9cdb1d088afb52cbad0fa1eb61329b9888ee9b04889ba66e69edd4" Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.038033 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2d9b-account-create-update-wlxl6"] Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.048340 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6nsdt"] Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.058237 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2d9b-account-create-update-wlxl6"] Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.068460 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6nsdt"] Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.433303 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38af35f6-7590-41c4-9442-ec89fe02106f" path="/var/lib/kubelet/pods/38af35f6-7590-41c4-9442-ec89fe02106f/volumes" Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.434245 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6fc941-1576-4817-859a-6644349bc8cd" path="/var/lib/kubelet/pods/3c6fc941-1576-4817-859a-6644349bc8cd/volumes" Feb 16 15:25:34 crc kubenswrapper[4705]: I0216 15:25:34.435564 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c18d067a-2ef1-4b11-936f-aef7f7910a80" path="/var/lib/kubelet/pods/c18d067a-2ef1-4b11-936f-aef7f7910a80/volumes" Feb 16 15:25:37 crc kubenswrapper[4705]: I0216 15:25:37.045710 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-x6wr8"] Feb 16 15:25:37 crc kubenswrapper[4705]: I0216 15:25:37.070448 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-ba40-account-create-update-8d7bg"] Feb 16 15:25:37 crc kubenswrapper[4705]: I0216 15:25:37.103044 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-mqnvt"] Feb 16 15:25:37 crc kubenswrapper[4705]: I0216 15:25:37.129449 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-x6wr8"] Feb 16 15:25:37 crc kubenswrapper[4705]: I0216 15:25:37.157436 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ba40-account-create-update-8d7bg"] Feb 16 15:25:37 crc kubenswrapper[4705]: I0216 15:25:37.175611 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-mqnvt"] Feb 16 15:25:38 crc kubenswrapper[4705]: I0216 15:25:38.434777 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a0302cb-f7dd-46d4-8df0-2ab25bddec10" path="/var/lib/kubelet/pods/6a0302cb-f7dd-46d4-8df0-2ab25bddec10/volumes" Feb 16 15:25:38 crc kubenswrapper[4705]: I0216 15:25:38.437290 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b2a0a9c-1379-457e-a5e2-537304cfdcff" path="/var/lib/kubelet/pods/7b2a0a9c-1379-457e-a5e2-537304cfdcff/volumes" Feb 16 15:25:38 crc kubenswrapper[4705]: I0216 15:25:38.438278 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b468686-b5ab-423d-a720-a2c77aed457f" path="/var/lib/kubelet/pods/8b468686-b5ab-423d-a720-a2c77aed457f/volumes" Feb 16 15:25:42 crc kubenswrapper[4705]: E0216 15:25:42.422562 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:25:46 crc kubenswrapper[4705]: E0216 15:25:46.430855 4705 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:25:57 crc kubenswrapper[4705]: E0216 15:25:57.424426 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:26:00 crc kubenswrapper[4705]: E0216 15:26:00.436490 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:26:10 crc kubenswrapper[4705]: E0216 15:26:10.423466 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:26:14 crc kubenswrapper[4705]: E0216 15:26:14.426306 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:26:18 crc kubenswrapper[4705]: I0216 15:26:18.071193 4705 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell0-conductor-db-sync-sz8ws"] Feb 16 15:26:18 crc kubenswrapper[4705]: I0216 15:26:18.087578 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-sz8ws"] Feb 16 15:26:18 crc kubenswrapper[4705]: I0216 15:26:18.445172 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06284688-bd14-48ff-adf1-d0dc441d1238" path="/var/lib/kubelet/pods/06284688-bd14-48ff-adf1-d0dc441d1238/volumes" Feb 16 15:26:22 crc kubenswrapper[4705]: E0216 15:26:22.423759 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:26:28 crc kubenswrapper[4705]: E0216 15:26:28.423014 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.377831 4705 scope.go:117] "RemoveContainer" containerID="5298d8d4bbe490dcf8fd4d8c8fd18c95543c555b9240d37267fbfc9891ee3207" Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.423037 4705 scope.go:117] "RemoveContainer" containerID="b6ff178ee59d258cd0a815ddbd0d83ca22d1d8fd5e5badc95b33346ac9ac1dd2" Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.498153 4705 scope.go:117] "RemoveContainer" containerID="fa03ffbdc99df54493084bdd802dfc7cc972f18375229d2457f61f8fa6ea18b6" Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.558706 4705 scope.go:117] "RemoveContainer" 
containerID="e8a382be23bea794eda4951ad147e8a541ec0cf46557fafa0b29ca1f74d84546" Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.619026 4705 scope.go:117] "RemoveContainer" containerID="624e47298bbfcaa05f1d1cb521cf8da9b7629abb98c32b57ca82484813d5a2ce" Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.682357 4705 scope.go:117] "RemoveContainer" containerID="85317c63c64342b640443d7128098cf7e3a161e71ceb14f41123a4cc90d3489a" Feb 16 15:26:33 crc kubenswrapper[4705]: I0216 15:26:33.763404 4705 scope.go:117] "RemoveContainer" containerID="8727f6608d01bea1d2d092cb593cbdfdbcf01d7388fded5a43fcf9ca1545112c" Feb 16 15:26:34 crc kubenswrapper[4705]: E0216 15:26:34.423202 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:26:39 crc kubenswrapper[4705]: E0216 15:26:39.423524 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:26:44 crc kubenswrapper[4705]: I0216 15:26:44.066669 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-1473-account-create-update-mpxtv"] Feb 16 15:26:44 crc kubenswrapper[4705]: I0216 15:26:44.084029 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-sz982"] Feb 16 15:26:44 crc kubenswrapper[4705]: I0216 15:26:44.097301 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-1473-account-create-update-mpxtv"] Feb 16 15:26:44 crc kubenswrapper[4705]: I0216 15:26:44.108590 4705 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-sz982"] Feb 16 15:26:44 crc kubenswrapper[4705]: I0216 15:26:44.450998 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481dd88a-36b9-432c-9d21-9221f5e98e6e" path="/var/lib/kubelet/pods/481dd88a-36b9-432c-9d21-9221f5e98e6e/volumes" Feb 16 15:26:44 crc kubenswrapper[4705]: I0216 15:26:44.451945 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="885bde30-8f11-4a3f-b1ed-db26e4aa4ab2" path="/var/lib/kubelet/pods/885bde30-8f11-4a3f-b1ed-db26e4aa4ab2/volumes" Feb 16 15:26:46 crc kubenswrapper[4705]: E0216 15:26:46.435088 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:26:52 crc kubenswrapper[4705]: E0216 15:26:52.423273 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:26:59 crc kubenswrapper[4705]: I0216 15:26:59.057654 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-6brrx"] Feb 16 15:26:59 crc kubenswrapper[4705]: I0216 15:26:59.072394 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-6brrx"] Feb 16 15:26:59 crc kubenswrapper[4705]: E0216 15:26:59.422878 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:27:00 crc kubenswrapper[4705]: I0216 15:27:00.440097 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf60aeda-83a7-4d56-95a6-c390c2d08b8a" path="/var/lib/kubelet/pods/bf60aeda-83a7-4d56-95a6-c390c2d08b8a/volumes" Feb 16 15:27:01 crc kubenswrapper[4705]: I0216 15:27:01.685106 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:27:01 crc kubenswrapper[4705]: I0216 15:27:01.685411 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:27:03 crc kubenswrapper[4705]: E0216 15:27:03.424662 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:27:10 crc kubenswrapper[4705]: E0216 15:27:10.423075 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:27:17 crc 
kubenswrapper[4705]: E0216 15:27:17.422292 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:27:18 crc kubenswrapper[4705]: I0216 15:27:18.046772 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-v8zp2"] Feb 16 15:27:18 crc kubenswrapper[4705]: I0216 15:27:18.063442 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c29kz"] Feb 16 15:27:18 crc kubenswrapper[4705]: I0216 15:27:18.071217 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-v8zp2"] Feb 16 15:27:18 crc kubenswrapper[4705]: I0216 15:27:18.081938 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c29kz"] Feb 16 15:27:18 crc kubenswrapper[4705]: I0216 15:27:18.436929 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993" path="/var/lib/kubelet/pods/b1bcfa4d-34ea-4c18-96b0-d8d6bcc1a993/volumes" Feb 16 15:27:18 crc kubenswrapper[4705]: I0216 15:27:18.437979 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8" path="/var/lib/kubelet/pods/b275ddcd-6aeb-46e3-8bdb-ea5e01a0b8d8/volumes" Feb 16 15:27:21 crc kubenswrapper[4705]: E0216 15:27:21.422587 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:27:30 crc kubenswrapper[4705]: 
E0216 15:27:30.422554 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:27:31 crc kubenswrapper[4705]: I0216 15:27:31.686032 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:27:31 crc kubenswrapper[4705]: I0216 15:27:31.686473 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:27:33 crc kubenswrapper[4705]: I0216 15:27:33.969751 4705 scope.go:117] "RemoveContainer" containerID="014788fc35c94841b6f951360c014870b95d49ee1ef3f79b1ab6afab99936dbb" Feb 16 15:27:34 crc kubenswrapper[4705]: I0216 15:27:34.019730 4705 scope.go:117] "RemoveContainer" containerID="550b8aa10a670058b9e6ac10f7f37313d7d31e0cbd688f1364fdc7c57db609af" Feb 16 15:27:34 crc kubenswrapper[4705]: I0216 15:27:34.060729 4705 scope.go:117] "RemoveContainer" containerID="156bb556fedfb04698cb018e9e76e595a938f3b84761da0b56951eb757c0d725" Feb 16 15:27:34 crc kubenswrapper[4705]: I0216 15:27:34.135125 4705 scope.go:117] "RemoveContainer" containerID="5ae2ce7f764bba95fefdc2957453d34ae6c76d5367261ab8d7e532efc53c1306" Feb 16 15:27:34 crc kubenswrapper[4705]: I0216 15:27:34.191304 4705 scope.go:117] "RemoveContainer" containerID="c4e41dff555ca49ad18fee2a483f8d8d621a7c447a6cc4eeeab8d6ada480a2b5" 
Feb 16 15:27:36 crc kubenswrapper[4705]: E0216 15:27:36.433329 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:27:42 crc kubenswrapper[4705]: E0216 15:27:42.423462 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:27:50 crc kubenswrapper[4705]: E0216 15:27:50.423595 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:27:53 crc kubenswrapper[4705]: E0216 15:27:53.422352 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:28:01 crc kubenswrapper[4705]: I0216 15:28:01.684832 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:28:01 crc 
kubenswrapper[4705]: I0216 15:28:01.685526 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:28:01 crc kubenswrapper[4705]: I0216 15:28:01.685595 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:28:01 crc kubenswrapper[4705]: I0216 15:28:01.687317 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f4db11e79090b84e2ad677e629027370d9c3ded7d98a18a3a8340dd55dee54a"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:28:01 crc kubenswrapper[4705]: I0216 15:28:01.687499 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://7f4db11e79090b84e2ad677e629027370d9c3ded7d98a18a3a8340dd55dee54a" gracePeriod=600 Feb 16 15:28:02 crc kubenswrapper[4705]: I0216 15:28:02.588679 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="7f4db11e79090b84e2ad677e629027370d9c3ded7d98a18a3a8340dd55dee54a" exitCode=0 Feb 16 15:28:02 crc kubenswrapper[4705]: I0216 15:28:02.588749 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"7f4db11e79090b84e2ad677e629027370d9c3ded7d98a18a3a8340dd55dee54a"} 
Feb 16 15:28:02 crc kubenswrapper[4705]: I0216 15:28:02.589273 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b"} Feb 16 15:28:02 crc kubenswrapper[4705]: I0216 15:28:02.589296 4705 scope.go:117] "RemoveContainer" containerID="f87fdccbe082b0b86e7920388b5eae4d6c9c0337fece7d0d39e284cde79ccd29" Feb 16 15:28:03 crc kubenswrapper[4705]: I0216 15:28:03.050864 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-v596j"] Feb 16 15:28:03 crc kubenswrapper[4705]: I0216 15:28:03.063777 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-v596j"] Feb 16 15:28:04 crc kubenswrapper[4705]: E0216 15:28:04.460323 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:28:04 crc kubenswrapper[4705]: I0216 15:28:04.471554 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d98759e-f50f-4b94-bd6a-8cfa1e083675" path="/var/lib/kubelet/pods/7d98759e-f50f-4b94-bd6a-8cfa1e083675/volumes" Feb 16 15:28:08 crc kubenswrapper[4705]: E0216 15:28:08.421804 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:28:16 crc kubenswrapper[4705]: E0216 15:28:16.446699 4705 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:28:19 crc kubenswrapper[4705]: E0216 15:28:19.423412 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:28:27 crc kubenswrapper[4705]: I0216 15:28:27.839301 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ddjpg"] Feb 16 15:28:27 crc kubenswrapper[4705]: I0216 15:28:27.843141 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:27 crc kubenswrapper[4705]: I0216 15:28:27.856300 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddjpg"] Feb 16 15:28:27 crc kubenswrapper[4705]: I0216 15:28:27.905415 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-utilities\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:27 crc kubenswrapper[4705]: I0216 15:28:27.905480 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fclxq\" (UniqueName: \"kubernetes.io/projected/1db7ee89-5367-4ead-bd1d-bcae066db67d-kube-api-access-fclxq\") pod \"redhat-operators-ddjpg\" (UID: 
\"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:27 crc kubenswrapper[4705]: I0216 15:28:27.905760 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-catalog-content\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.010500 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-catalog-content\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.010567 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-utilities\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.010613 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fclxq\" (UniqueName: \"kubernetes.io/projected/1db7ee89-5367-4ead-bd1d-bcae066db67d-kube-api-access-fclxq\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.011179 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-catalog-content\") pod \"redhat-operators-ddjpg\" (UID: 
\"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.011184 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-utilities\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.048379 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fclxq\" (UniqueName: \"kubernetes.io/projected/1db7ee89-5367-4ead-bd1d-bcae066db67d-kube-api-access-fclxq\") pod \"redhat-operators-ddjpg\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.177488 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:28 crc kubenswrapper[4705]: W0216 15:28:28.717817 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1db7ee89_5367_4ead_bd1d_bcae066db67d.slice/crio-d61aa740f8f8942456ac52a4b287234ca3e8a429341ced94537e968b47236e9b WatchSource:0}: Error finding container d61aa740f8f8942456ac52a4b287234ca3e8a429341ced94537e968b47236e9b: Status 404 returned error can't find the container with id d61aa740f8f8942456ac52a4b287234ca3e8a429341ced94537e968b47236e9b Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.727819 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddjpg"] Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.946472 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" 
event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerStarted","Data":"cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5"} Feb 16 15:28:28 crc kubenswrapper[4705]: I0216 15:28:28.946526 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerStarted","Data":"d61aa740f8f8942456ac52a4b287234ca3e8a429341ced94537e968b47236e9b"} Feb 16 15:28:29 crc kubenswrapper[4705]: I0216 15:28:29.961666 4705 generic.go:334] "Generic (PLEG): container finished" podID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerID="cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5" exitCode=0 Feb 16 15:28:29 crc kubenswrapper[4705]: I0216 15:28:29.961733 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerDied","Data":"cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5"} Feb 16 15:28:30 crc kubenswrapper[4705]: E0216 15:28:30.422525 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:28:31 crc kubenswrapper[4705]: I0216 15:28:31.989209 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerStarted","Data":"c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363"} Feb 16 15:28:33 crc kubenswrapper[4705]: E0216 15:28:33.420812 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:28:34 crc kubenswrapper[4705]: I0216 15:28:34.378425 4705 scope.go:117] "RemoveContainer" containerID="eee5c8bc6c54de4fa60aca953615e0f47f05dac72e43473a8138c9827fdeee6c" Feb 16 15:28:36 crc kubenswrapper[4705]: I0216 15:28:36.032236 4705 generic.go:334] "Generic (PLEG): container finished" podID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerID="c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363" exitCode=0 Feb 16 15:28:36 crc kubenswrapper[4705]: I0216 15:28:36.032318 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerDied","Data":"c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363"} Feb 16 15:28:37 crc kubenswrapper[4705]: I0216 15:28:37.052068 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerStarted","Data":"42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982"} Feb 16 15:28:37 crc kubenswrapper[4705]: I0216 15:28:37.082566 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ddjpg" podStartSLOduration=3.462428564 podStartE2EDuration="10.082521911s" podCreationTimestamp="2026-02-16 15:28:27 +0000 UTC" firstStartedPulling="2026-02-16 15:28:29.965563957 +0000 UTC m=+2104.150541043" lastFinishedPulling="2026-02-16 15:28:36.585657314 +0000 UTC m=+2110.770634390" observedRunningTime="2026-02-16 15:28:37.071742987 +0000 UTC m=+2111.256720073" watchObservedRunningTime="2026-02-16 15:28:37.082521911 +0000 UTC m=+2111.267498987" Feb 16 15:28:38 crc kubenswrapper[4705]: I0216 15:28:38.178010 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:38 crc kubenswrapper[4705]: I0216 15:28:38.178486 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:39 crc kubenswrapper[4705]: I0216 15:28:39.239977 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ddjpg" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="registry-server" probeResult="failure" output=< Feb 16 15:28:39 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:28:39 crc kubenswrapper[4705]: > Feb 16 15:28:43 crc kubenswrapper[4705]: E0216 15:28:43.423142 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:28:48 crc kubenswrapper[4705]: E0216 15:28:48.423029 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:28:49 crc kubenswrapper[4705]: I0216 15:28:49.227955 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ddjpg" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="registry-server" probeResult="failure" output=< Feb 16 15:28:49 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:28:49 crc kubenswrapper[4705]: > Feb 16 15:28:54 crc kubenswrapper[4705]: E0216 15:28:54.422564 4705 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:28:58 crc kubenswrapper[4705]: I0216 15:28:58.259748 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:58 crc kubenswrapper[4705]: I0216 15:28:58.319183 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.051237 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddjpg"] Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.307642 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ddjpg" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="registry-server" containerID="cri-o://42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982" gracePeriod=2 Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.802017 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.840607 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-utilities\") pod \"1db7ee89-5367-4ead-bd1d-bcae066db67d\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.840730 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fclxq\" (UniqueName: \"kubernetes.io/projected/1db7ee89-5367-4ead-bd1d-bcae066db67d-kube-api-access-fclxq\") pod \"1db7ee89-5367-4ead-bd1d-bcae066db67d\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.840876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-catalog-content\") pod \"1db7ee89-5367-4ead-bd1d-bcae066db67d\" (UID: \"1db7ee89-5367-4ead-bd1d-bcae066db67d\") " Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.841494 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-utilities" (OuterVolumeSpecName: "utilities") pod "1db7ee89-5367-4ead-bd1d-bcae066db67d" (UID: "1db7ee89-5367-4ead-bd1d-bcae066db67d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.841768 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.873851 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db7ee89-5367-4ead-bd1d-bcae066db67d-kube-api-access-fclxq" (OuterVolumeSpecName: "kube-api-access-fclxq") pod "1db7ee89-5367-4ead-bd1d-bcae066db67d" (UID: "1db7ee89-5367-4ead-bd1d-bcae066db67d"). InnerVolumeSpecName "kube-api-access-fclxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.944064 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fclxq\" (UniqueName: \"kubernetes.io/projected/1db7ee89-5367-4ead-bd1d-bcae066db67d-kube-api-access-fclxq\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:59 crc kubenswrapper[4705]: I0216 15:28:59.986755 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1db7ee89-5367-4ead-bd1d-bcae066db67d" (UID: "1db7ee89-5367-4ead-bd1d-bcae066db67d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.046743 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1db7ee89-5367-4ead-bd1d-bcae066db67d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.320116 4705 generic.go:334] "Generic (PLEG): container finished" podID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerID="42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982" exitCode=0 Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.320175 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerDied","Data":"42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982"} Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.320212 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddjpg" event={"ID":"1db7ee89-5367-4ead-bd1d-bcae066db67d","Type":"ContainerDied","Data":"d61aa740f8f8942456ac52a4b287234ca3e8a429341ced94537e968b47236e9b"} Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.320239 4705 scope.go:117] "RemoveContainer" containerID="42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.320443 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ddjpg" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.358151 4705 scope.go:117] "RemoveContainer" containerID="c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.374738 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddjpg"] Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.382507 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ddjpg"] Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.399123 4705 scope.go:117] "RemoveContainer" containerID="cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.441740 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" path="/var/lib/kubelet/pods/1db7ee89-5367-4ead-bd1d-bcae066db67d/volumes" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.443006 4705 scope.go:117] "RemoveContainer" containerID="42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982" Feb 16 15:29:00 crc kubenswrapper[4705]: E0216 15:29:00.443699 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982\": container with ID starting with 42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982 not found: ID does not exist" containerID="42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.443747 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982"} err="failed to get container status 
\"42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982\": rpc error: code = NotFound desc = could not find container \"42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982\": container with ID starting with 42f46f557728d017372246a3ff0377530c2fd29e3c283b805593714b9f113982 not found: ID does not exist" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.443777 4705 scope.go:117] "RemoveContainer" containerID="c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363" Feb 16 15:29:00 crc kubenswrapper[4705]: E0216 15:29:00.444261 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363\": container with ID starting with c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363 not found: ID does not exist" containerID="c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.444331 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363"} err="failed to get container status \"c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363\": rpc error: code = NotFound desc = could not find container \"c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363\": container with ID starting with c056f6a1389c6eb03d4dc94f9a79ff07298d57cb44832831c6b53eaae7b95363 not found: ID does not exist" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.444404 4705 scope.go:117] "RemoveContainer" containerID="cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5" Feb 16 15:29:00 crc kubenswrapper[4705]: E0216 15:29:00.445084 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5\": container with ID starting with cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5 not found: ID does not exist" containerID="cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5" Feb 16 15:29:00 crc kubenswrapper[4705]: I0216 15:29:00.445255 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5"} err="failed to get container status \"cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5\": rpc error: code = NotFound desc = could not find container \"cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5\": container with ID starting with cbe43cf80427fef87cefe2e6949be1d782cc9e0dd6236ef599f8b04d94fad9b5 not found: ID does not exist" Feb 16 15:29:02 crc kubenswrapper[4705]: E0216 15:29:02.422145 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:29:06 crc kubenswrapper[4705]: E0216 15:29:06.428465 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:29:17 crc kubenswrapper[4705]: E0216 15:29:17.423278 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:29:20 crc kubenswrapper[4705]: E0216 15:29:20.421577 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.248280 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qz8rs"] Feb 16 15:29:25 crc kubenswrapper[4705]: E0216 15:29:25.249625 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="extract-content" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.249639 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="extract-content" Feb 16 15:29:25 crc kubenswrapper[4705]: E0216 15:29:25.249658 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="registry-server" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.249665 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="registry-server" Feb 16 15:29:25 crc kubenswrapper[4705]: E0216 15:29:25.249708 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="extract-utilities" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.249716 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="extract-utilities" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.264365 4705 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1db7ee89-5367-4ead-bd1d-bcae066db67d" containerName="registry-server" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.266610 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qz8rs"] Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.266712 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.303692 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txfhn\" (UniqueName: \"kubernetes.io/projected/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-kube-api-access-txfhn\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.303801 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-utilities\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.303871 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-catalog-content\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.406327 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-utilities\") pod \"certified-operators-qz8rs\" (UID: 
\"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.406455 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-catalog-content\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.406610 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txfhn\" (UniqueName: \"kubernetes.io/projected/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-kube-api-access-txfhn\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.407153 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-catalog-content\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.407298 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-utilities\") pod \"certified-operators-qz8rs\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.428450 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txfhn\" (UniqueName: \"kubernetes.io/projected/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-kube-api-access-txfhn\") pod \"certified-operators-qz8rs\" (UID: 
\"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:25 crc kubenswrapper[4705]: I0216 15:29:25.593319 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:26 crc kubenswrapper[4705]: I0216 15:29:26.108560 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qz8rs"] Feb 16 15:29:26 crc kubenswrapper[4705]: I0216 15:29:26.583446 4705 generic.go:334] "Generic (PLEG): container finished" podID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerID="98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff" exitCode=0 Feb 16 15:29:26 crc kubenswrapper[4705]: I0216 15:29:26.583538 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerDied","Data":"98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff"} Feb 16 15:29:26 crc kubenswrapper[4705]: I0216 15:29:26.583875 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerStarted","Data":"c1cd38255b851b38bfdf2fa0e752842971171b977b22335433117f4a4d1e8923"} Feb 16 15:29:27 crc kubenswrapper[4705]: I0216 15:29:27.597008 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerStarted","Data":"28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011"} Feb 16 15:29:28 crc kubenswrapper[4705]: I0216 15:29:28.608324 4705 generic.go:334] "Generic (PLEG): container finished" podID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerID="28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011" exitCode=0 Feb 16 15:29:28 crc kubenswrapper[4705]: I0216 
15:29:28.608405 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerDied","Data":"28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011"} Feb 16 15:29:29 crc kubenswrapper[4705]: I0216 15:29:29.620416 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerStarted","Data":"a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b"} Feb 16 15:29:29 crc kubenswrapper[4705]: I0216 15:29:29.652084 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qz8rs" podStartSLOduration=2.2214376590000002 podStartE2EDuration="4.652062237s" podCreationTimestamp="2026-02-16 15:29:25 +0000 UTC" firstStartedPulling="2026-02-16 15:29:26.587341988 +0000 UTC m=+2160.772319064" lastFinishedPulling="2026-02-16 15:29:29.017966566 +0000 UTC m=+2163.202943642" observedRunningTime="2026-02-16 15:29:29.639387439 +0000 UTC m=+2163.824364525" watchObservedRunningTime="2026-02-16 15:29:29.652062237 +0000 UTC m=+2163.837039303" Feb 16 15:29:31 crc kubenswrapper[4705]: E0216 15:29:31.421833 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:29:32 crc kubenswrapper[4705]: E0216 15:29:32.421453 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" 
podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.623649 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dmwcz"] Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.629497 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.639827 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmwcz"] Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.710000 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-catalog-content\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.710139 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsn55\" (UniqueName: \"kubernetes.io/projected/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-kube-api-access-rsn55\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.710192 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-utilities\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.812521 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-catalog-content\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.812639 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsn55\" (UniqueName: \"kubernetes.io/projected/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-kube-api-access-rsn55\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.812685 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-utilities\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.813500 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-catalog-content\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.813577 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-utilities\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.832394 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsn55\" (UniqueName: 
\"kubernetes.io/projected/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-kube-api-access-rsn55\") pod \"community-operators-dmwcz\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:32 crc kubenswrapper[4705]: I0216 15:29:32.950679 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:33 crc kubenswrapper[4705]: I0216 15:29:33.472907 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmwcz"] Feb 16 15:29:33 crc kubenswrapper[4705]: I0216 15:29:33.666834 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerStarted","Data":"3ad7e362a5d5fec61f0b51b0a86fc6db1eddbaabfe14cce7548c481ee1985bf8"} Feb 16 15:29:34 crc kubenswrapper[4705]: I0216 15:29:34.679175 4705 generic.go:334] "Generic (PLEG): container finished" podID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerID="bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd" exitCode=0 Feb 16 15:29:34 crc kubenswrapper[4705]: I0216 15:29:34.679256 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerDied","Data":"bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd"} Feb 16 15:29:35 crc kubenswrapper[4705]: I0216 15:29:35.593743 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:35 crc kubenswrapper[4705]: I0216 15:29:35.593985 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:35 crc kubenswrapper[4705]: I0216 15:29:35.664785 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:35 crc kubenswrapper[4705]: I0216 15:29:35.771213 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:36 crc kubenswrapper[4705]: I0216 15:29:36.712501 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerStarted","Data":"d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1"} Feb 16 15:29:36 crc kubenswrapper[4705]: I0216 15:29:36.810208 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qz8rs"] Feb 16 15:29:37 crc kubenswrapper[4705]: I0216 15:29:37.721697 4705 generic.go:334] "Generic (PLEG): container finished" podID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerID="d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1" exitCode=0 Feb 16 15:29:37 crc kubenswrapper[4705]: I0216 15:29:37.721770 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerDied","Data":"d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1"} Feb 16 15:29:37 crc kubenswrapper[4705]: I0216 15:29:37.721900 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qz8rs" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="registry-server" containerID="cri-o://a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b" gracePeriod=2 Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.336431 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.464069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-catalog-content\") pod \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.464186 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txfhn\" (UniqueName: \"kubernetes.io/projected/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-kube-api-access-txfhn\") pod \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.464359 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-utilities\") pod \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\" (UID: \"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5\") " Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.465829 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-utilities" (OuterVolumeSpecName: "utilities") pod "17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" (UID: "17b4bff1-b94a-4dcb-a954-dbd14f32dfb5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.466170 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.472898 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-kube-api-access-txfhn" (OuterVolumeSpecName: "kube-api-access-txfhn") pod "17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" (UID: "17b4bff1-b94a-4dcb-a954-dbd14f32dfb5"). InnerVolumeSpecName "kube-api-access-txfhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.531139 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" (UID: "17b4bff1-b94a-4dcb-a954-dbd14f32dfb5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.568888 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.568925 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txfhn\" (UniqueName: \"kubernetes.io/projected/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5-kube-api-access-txfhn\") on node \"crc\" DevicePath \"\"" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.736137 4705 generic.go:334] "Generic (PLEG): container finished" podID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerID="a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b" exitCode=0 Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.736245 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerDied","Data":"a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b"} Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.736304 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz8rs" event={"ID":"17b4bff1-b94a-4dcb-a954-dbd14f32dfb5","Type":"ContainerDied","Data":"c1cd38255b851b38bfdf2fa0e752842971171b977b22335433117f4a4d1e8923"} Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.736329 4705 scope.go:117] "RemoveContainer" containerID="a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.736259 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qz8rs" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.741056 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerStarted","Data":"886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d"} Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.757692 4705 scope.go:117] "RemoveContainer" containerID="28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.773458 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dmwcz" podStartSLOduration=3.304821954 podStartE2EDuration="6.771496563s" podCreationTimestamp="2026-02-16 15:29:32 +0000 UTC" firstStartedPulling="2026-02-16 15:29:34.683864225 +0000 UTC m=+2168.868841301" lastFinishedPulling="2026-02-16 15:29:38.150538834 +0000 UTC m=+2172.335515910" observedRunningTime="2026-02-16 15:29:38.760879033 +0000 UTC m=+2172.945856119" watchObservedRunningTime="2026-02-16 15:29:38.771496563 +0000 UTC m=+2172.956473649" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.790770 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qz8rs"] Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.797508 4705 scope.go:117] "RemoveContainer" containerID="98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.801843 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qz8rs"] Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.855911 4705 scope.go:117] "RemoveContainer" containerID="a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b" Feb 16 15:29:38 crc kubenswrapper[4705]: E0216 15:29:38.856475 4705 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b\": container with ID starting with a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b not found: ID does not exist" containerID="a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.856528 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b"} err="failed to get container status \"a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b\": rpc error: code = NotFound desc = could not find container \"a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b\": container with ID starting with a28aa7075ccd651e83ace6ac26ce0ee6c0d2fd29ab7e79ff0f3a7bd0a412b99b not found: ID does not exist" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.856565 4705 scope.go:117] "RemoveContainer" containerID="28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011" Feb 16 15:29:38 crc kubenswrapper[4705]: E0216 15:29:38.857063 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011\": container with ID starting with 28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011 not found: ID does not exist" containerID="28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.857099 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011"} err="failed to get container status \"28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011\": rpc error: code = NotFound desc = could 
not find container \"28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011\": container with ID starting with 28cb8d534fbbb1f329d5a1ed1da5aa54adfd67eb67989e5590ab0ef177dbb011 not found: ID does not exist" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.857128 4705 scope.go:117] "RemoveContainer" containerID="98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff" Feb 16 15:29:38 crc kubenswrapper[4705]: E0216 15:29:38.858831 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff\": container with ID starting with 98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff not found: ID does not exist" containerID="98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff" Feb 16 15:29:38 crc kubenswrapper[4705]: I0216 15:29:38.858864 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff"} err="failed to get container status \"98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff\": rpc error: code = NotFound desc = could not find container \"98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff\": container with ID starting with 98977d214066b66d6926b5b489199c7ee2309ae897df03bc581cedc115cbbaff not found: ID does not exist" Feb 16 15:29:40 crc kubenswrapper[4705]: I0216 15:29:40.438828 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" path="/var/lib/kubelet/pods/17b4bff1-b94a-4dcb-a954-dbd14f32dfb5/volumes" Feb 16 15:29:42 crc kubenswrapper[4705]: I0216 15:29:42.952738 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:42 crc kubenswrapper[4705]: I0216 15:29:42.953682 4705 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:44 crc kubenswrapper[4705]: I0216 15:29:44.021215 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dmwcz" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="registry-server" probeResult="failure" output=< Feb 16 15:29:44 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:29:44 crc kubenswrapper[4705]: > Feb 16 15:29:45 crc kubenswrapper[4705]: E0216 15:29:45.421573 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:29:45 crc kubenswrapper[4705]: E0216 15:29:45.421605 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:29:53 crc kubenswrapper[4705]: I0216 15:29:53.042586 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:53 crc kubenswrapper[4705]: I0216 15:29:53.104311 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:53 crc kubenswrapper[4705]: I0216 15:29:53.283608 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmwcz"] Feb 16 15:29:54 crc kubenswrapper[4705]: I0216 15:29:54.982913 4705 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-dmwcz" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="registry-server" containerID="cri-o://886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d" gracePeriod=2 Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.550102 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.662227 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-catalog-content\") pod \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.662725 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-utilities\") pod \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.662909 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsn55\" (UniqueName: \"kubernetes.io/projected/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-kube-api-access-rsn55\") pod \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\" (UID: \"be21c4cc-f0fe-4e3e-aac6-1dabd8957912\") " Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.664518 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-utilities" (OuterVolumeSpecName: "utilities") pod "be21c4cc-f0fe-4e3e-aac6-1dabd8957912" (UID: "be21c4cc-f0fe-4e3e-aac6-1dabd8957912"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.669915 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-kube-api-access-rsn55" (OuterVolumeSpecName: "kube-api-access-rsn55") pod "be21c4cc-f0fe-4e3e-aac6-1dabd8957912" (UID: "be21c4cc-f0fe-4e3e-aac6-1dabd8957912"). InnerVolumeSpecName "kube-api-access-rsn55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.710748 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be21c4cc-f0fe-4e3e-aac6-1dabd8957912" (UID: "be21c4cc-f0fe-4e3e-aac6-1dabd8957912"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.767083 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.767145 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsn55\" (UniqueName: \"kubernetes.io/projected/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-kube-api-access-rsn55\") on node \"crc\" DevicePath \"\"" Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.767164 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be21c4cc-f0fe-4e3e-aac6-1dabd8957912-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.994789 4705 generic.go:334] "Generic (PLEG): container finished" podID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" 
containerID="886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d" exitCode=0 Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.994841 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerDied","Data":"886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d"} Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.994879 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmwcz" event={"ID":"be21c4cc-f0fe-4e3e-aac6-1dabd8957912","Type":"ContainerDied","Data":"3ad7e362a5d5fec61f0b51b0a86fc6db1eddbaabfe14cce7548c481ee1985bf8"} Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.994898 4705 scope.go:117] "RemoveContainer" containerID="886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d" Feb 16 15:29:55 crc kubenswrapper[4705]: I0216 15:29:55.994919 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmwcz" Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.018195 4705 scope.go:117] "RemoveContainer" containerID="d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1" Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.049196 4705 scope.go:117] "RemoveContainer" containerID="bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd" Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.056765 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmwcz"] Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.069106 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dmwcz"] Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.129145 4705 scope.go:117] "RemoveContainer" containerID="886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d" Feb 16 15:29:56 crc kubenswrapper[4705]: E0216 15:29:56.129826 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d\": container with ID starting with 886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d not found: ID does not exist" containerID="886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d" Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.129869 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d"} err="failed to get container status \"886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d\": rpc error: code = NotFound desc = could not find container \"886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d\": container with ID starting with 886e98f330077ec2a5ae10540adbbb5bd7c4035ecb4be73bdc7f98578a2bf31d not 
found: ID does not exist" Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.129895 4705 scope.go:117] "RemoveContainer" containerID="d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1" Feb 16 15:29:56 crc kubenswrapper[4705]: E0216 15:29:56.130289 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1\": container with ID starting with d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1 not found: ID does not exist" containerID="d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1" Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.130311 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1"} err="failed to get container status \"d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1\": rpc error: code = NotFound desc = could not find container \"d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1\": container with ID starting with d28de9e482981429eb6fd5e2985b573b04b68d47679961e838a13696cc3fd5f1 not found: ID does not exist" Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.130324 4705 scope.go:117] "RemoveContainer" containerID="bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd" Feb 16 15:29:56 crc kubenswrapper[4705]: E0216 15:29:56.130679 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd\": container with ID starting with bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd not found: ID does not exist" containerID="bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd" Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.130706 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd"} err="failed to get container status \"bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd\": rpc error: code = NotFound desc = could not find container \"bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd\": container with ID starting with bedf0a7b12657cfaf8f24a74b557fea59fb013274b930cd728867f6b97810ddd not found: ID does not exist" Feb 16 15:29:56 crc kubenswrapper[4705]: E0216 15:29:56.428255 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:29:56 crc kubenswrapper[4705]: I0216 15:29:56.431983 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" path="/var/lib/kubelet/pods/be21c4cc-f0fe-4e3e-aac6-1dabd8957912/volumes" Feb 16 15:29:57 crc kubenswrapper[4705]: E0216 15:29:57.421993 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.159160 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4"] Feb 16 15:30:00 crc kubenswrapper[4705]: E0216 15:30:00.160212 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="extract-content" Feb 16 15:30:00 crc 
kubenswrapper[4705]: I0216 15:30:00.160227 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="extract-content" Feb 16 15:30:00 crc kubenswrapper[4705]: E0216 15:30:00.160246 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.160252 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4705]: E0216 15:30:00.160275 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.160282 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4705]: E0216 15:30:00.160297 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="extract-utilities" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.160303 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="extract-utilities" Feb 16 15:30:00 crc kubenswrapper[4705]: E0216 15:30:00.160331 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="extract-utilities" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.160337 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="extract-utilities" Feb 16 15:30:00 crc kubenswrapper[4705]: E0216 15:30:00.160350 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="extract-content" Feb 16 15:30:00 crc 
kubenswrapper[4705]: I0216 15:30:00.160356 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="extract-content" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.160573 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="be21c4cc-f0fe-4e3e-aac6-1dabd8957912" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.160612 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b4bff1-b94a-4dcb-a954-dbd14f32dfb5" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.161618 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.163797 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.164204 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.173252 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4"] Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.304867 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a4c227-649b-4c63-a135-9e62204fb5e6-secret-volume\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.305006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a4c227-649b-4c63-a135-9e62204fb5e6-config-volume\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.305071 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrv69\" (UniqueName: \"kubernetes.io/projected/d7a4c227-649b-4c63-a135-9e62204fb5e6-kube-api-access-mrv69\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.408055 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a4c227-649b-4c63-a135-9e62204fb5e6-config-volume\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.408174 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrv69\" (UniqueName: \"kubernetes.io/projected/d7a4c227-649b-4c63-a135-9e62204fb5e6-kube-api-access-mrv69\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.408425 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a4c227-649b-4c63-a135-9e62204fb5e6-secret-volume\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.409340 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a4c227-649b-4c63-a135-9e62204fb5e6-config-volume\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.425872 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a4c227-649b-4c63-a135-9e62204fb5e6-secret-volume\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.439395 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrv69\" (UniqueName: \"kubernetes.io/projected/d7a4c227-649b-4c63-a135-9e62204fb5e6-kube-api-access-mrv69\") pod \"collect-profiles-29520930-xzxs4\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:00 crc kubenswrapper[4705]: I0216 15:30:00.660474 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:01 crc kubenswrapper[4705]: I0216 15:30:01.172015 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4"] Feb 16 15:30:02 crc kubenswrapper[4705]: I0216 15:30:02.065125 4705 generic.go:334] "Generic (PLEG): container finished" podID="d7a4c227-649b-4c63-a135-9e62204fb5e6" containerID="3d19ac739f139aac059dd3041dabf5e11ac0e7c9a2e1687b953e4ecc1918d35b" exitCode=0 Feb 16 15:30:02 crc kubenswrapper[4705]: I0216 15:30:02.065330 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" event={"ID":"d7a4c227-649b-4c63-a135-9e62204fb5e6","Type":"ContainerDied","Data":"3d19ac739f139aac059dd3041dabf5e11ac0e7c9a2e1687b953e4ecc1918d35b"} Feb 16 15:30:02 crc kubenswrapper[4705]: I0216 15:30:02.065817 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" event={"ID":"d7a4c227-649b-4c63-a135-9e62204fb5e6","Type":"ContainerStarted","Data":"28e74c4e86789fb4ef2937dd57dc3f07315abf5470a68284b6cdc7061d0690ca"} Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.510664 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.644117 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrv69\" (UniqueName: \"kubernetes.io/projected/d7a4c227-649b-4c63-a135-9e62204fb5e6-kube-api-access-mrv69\") pod \"d7a4c227-649b-4c63-a135-9e62204fb5e6\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.644353 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a4c227-649b-4c63-a135-9e62204fb5e6-config-volume\") pod \"d7a4c227-649b-4c63-a135-9e62204fb5e6\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.644411 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a4c227-649b-4c63-a135-9e62204fb5e6-secret-volume\") pod \"d7a4c227-649b-4c63-a135-9e62204fb5e6\" (UID: \"d7a4c227-649b-4c63-a135-9e62204fb5e6\") " Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.645351 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a4c227-649b-4c63-a135-9e62204fb5e6-config-volume" (OuterVolumeSpecName: "config-volume") pod "d7a4c227-649b-4c63-a135-9e62204fb5e6" (UID: "d7a4c227-649b-4c63-a135-9e62204fb5e6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.650697 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a4c227-649b-4c63-a135-9e62204fb5e6-kube-api-access-mrv69" (OuterVolumeSpecName: "kube-api-access-mrv69") pod "d7a4c227-649b-4c63-a135-9e62204fb5e6" (UID: "d7a4c227-649b-4c63-a135-9e62204fb5e6"). 
InnerVolumeSpecName "kube-api-access-mrv69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.650888 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a4c227-649b-4c63-a135-9e62204fb5e6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d7a4c227-649b-4c63-a135-9e62204fb5e6" (UID: "d7a4c227-649b-4c63-a135-9e62204fb5e6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.748110 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a4c227-649b-4c63-a135-9e62204fb5e6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.748284 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a4c227-649b-4c63-a135-9e62204fb5e6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:03 crc kubenswrapper[4705]: I0216 15:30:03.748306 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrv69\" (UniqueName: \"kubernetes.io/projected/d7a4c227-649b-4c63-a135-9e62204fb5e6-kube-api-access-mrv69\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:04 crc kubenswrapper[4705]: I0216 15:30:04.097780 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" event={"ID":"d7a4c227-649b-4c63-a135-9e62204fb5e6","Type":"ContainerDied","Data":"28e74c4e86789fb4ef2937dd57dc3f07315abf5470a68284b6cdc7061d0690ca"} Feb 16 15:30:04 crc kubenswrapper[4705]: I0216 15:30:04.097825 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28e74c4e86789fb4ef2937dd57dc3f07315abf5470a68284b6cdc7061d0690ca" Feb 16 15:30:04 crc kubenswrapper[4705]: I0216 15:30:04.097876 4705 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4" Feb 16 15:30:04 crc kubenswrapper[4705]: I0216 15:30:04.606246 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q"] Feb 16 15:30:04 crc kubenswrapper[4705]: I0216 15:30:04.622141 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-h8s9q"] Feb 16 15:30:06 crc kubenswrapper[4705]: I0216 15:30:06.432849 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc25ae00-316a-4dfb-8a83-72fe2318da5e" path="/var/lib/kubelet/pods/fc25ae00-316a-4dfb-8a83-72fe2318da5e/volumes" Feb 16 15:30:08 crc kubenswrapper[4705]: E0216 15:30:08.421353 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:30:11 crc kubenswrapper[4705]: I0216 15:30:11.423544 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:30:11 crc kubenswrapper[4705]: E0216 15:30:11.559239 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:30:11 crc kubenswrapper[4705]: E0216 15:30:11.559605 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:30:11 crc kubenswrapper[4705]: E0216 15:30:11.559749 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5
d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:30:11 crc kubenswrapper[4705]: E0216 15:30:11.561035 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:30:19 crc kubenswrapper[4705]: E0216 15:30:19.525600 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:30:19 crc kubenswrapper[4705]: E0216 15:30:19.527160 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:30:19 crc kubenswrapper[4705]: E0216 15:30:19.527357 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:30:19 crc kubenswrapper[4705]: E0216 15:30:19.528880 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:30:25 crc kubenswrapper[4705]: E0216 15:30:25.424043 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:30:31 crc kubenswrapper[4705]: E0216 15:30:31.425184 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:30:31 crc kubenswrapper[4705]: I0216 15:30:31.684478 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:30:31 crc kubenswrapper[4705]: I0216 15:30:31.684560 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:30:32 crc kubenswrapper[4705]: I0216 15:30:32.437531 4705 generic.go:334] "Generic (PLEG): container finished" podID="447b9ab7-d583-4e71-8eca-fb352e541b13" containerID="28b8a03511de9f268771916995dae0e764844fbb28d7392f4eab5fc6742c96ba" exitCode=2 Feb 16 15:30:32 crc kubenswrapper[4705]: I0216 15:30:32.441730 4705 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" event={"ID":"447b9ab7-d583-4e71-8eca-fb352e541b13","Type":"ContainerDied","Data":"28b8a03511de9f268771916995dae0e764844fbb28d7392f4eab5fc6742c96ba"} Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.036594 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.129860 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvj82\" (UniqueName: \"kubernetes.io/projected/447b9ab7-d583-4e71-8eca-fb352e541b13-kube-api-access-vvj82\") pod \"447b9ab7-d583-4e71-8eca-fb352e541b13\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.129954 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-ssh-key-openstack-edpm-ipam\") pod \"447b9ab7-d583-4e71-8eca-fb352e541b13\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.130203 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-inventory\") pod \"447b9ab7-d583-4e71-8eca-fb352e541b13\" (UID: \"447b9ab7-d583-4e71-8eca-fb352e541b13\") " Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.139096 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/447b9ab7-d583-4e71-8eca-fb352e541b13-kube-api-access-vvj82" (OuterVolumeSpecName: "kube-api-access-vvj82") pod "447b9ab7-d583-4e71-8eca-fb352e541b13" (UID: "447b9ab7-d583-4e71-8eca-fb352e541b13"). InnerVolumeSpecName "kube-api-access-vvj82". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.180585 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-inventory" (OuterVolumeSpecName: "inventory") pod "447b9ab7-d583-4e71-8eca-fb352e541b13" (UID: "447b9ab7-d583-4e71-8eca-fb352e541b13"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.202957 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "447b9ab7-d583-4e71-8eca-fb352e541b13" (UID: "447b9ab7-d583-4e71-8eca-fb352e541b13"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.235051 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.235109 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvj82\" (UniqueName: \"kubernetes.io/projected/447b9ab7-d583-4e71-8eca-fb352e541b13-kube-api-access-vvj82\") on node \"crc\" DevicePath \"\""
Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.235132 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/447b9ab7-d583-4e71-8eca-fb352e541b13-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.476079 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g" event={"ID":"447b9ab7-d583-4e71-8eca-fb352e541b13","Type":"ContainerDied","Data":"1755306531a2954e5ed18a62c5063702c29a4b80eca86c2194ea2e1192d5af0b"}
Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.476162 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1755306531a2954e5ed18a62c5063702c29a4b80eca86c2194ea2e1192d5af0b"
Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.476295 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-drn5g"
Feb 16 15:30:34 crc kubenswrapper[4705]: I0216 15:30:34.523444 4705 scope.go:117] "RemoveContainer" containerID="5fa9675e76e9d05c53516ed8415decce4c44f3785514ae5a86a5062278da9f97"
Feb 16 15:30:40 crc kubenswrapper[4705]: E0216 15:30:40.423965 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.050608 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"]
Feb 16 15:30:42 crc kubenswrapper[4705]: E0216 15:30:42.052306 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a4c227-649b-4c63-a135-9e62204fb5e6" containerName="collect-profiles"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.052343 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a4c227-649b-4c63-a135-9e62204fb5e6" containerName="collect-profiles"
Feb 16 15:30:42 crc kubenswrapper[4705]: E0216 15:30:42.052448 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="447b9ab7-d583-4e71-8eca-fb352e541b13" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.052473 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="447b9ab7-d583-4e71-8eca-fb352e541b13" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.053211 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="447b9ab7-d583-4e71-8eca-fb352e541b13" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.053287 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a4c227-649b-4c63-a135-9e62204fb5e6" containerName="collect-profiles"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.056825 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.061587 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.062421 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.064526 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"]
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.065195 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.065679 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.204458 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4nb7\" (UniqueName: \"kubernetes.io/projected/0b4f3354-7fb7-4031-9c17-270d82f9ece1-kube-api-access-m4nb7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.204511 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.204582 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.307363 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.307524 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.307762 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4nb7\" (UniqueName: \"kubernetes.io/projected/0b4f3354-7fb7-4031-9c17-270d82f9ece1-kube-api-access-m4nb7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.316153 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.316669 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.326691 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4nb7\" (UniqueName: \"kubernetes.io/projected/0b4f3354-7fb7-4031-9c17-270d82f9ece1-kube-api-access-m4nb7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"
Feb 16 15:30:42 crc kubenswrapper[4705]: I0216 15:30:42.390795 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"
Feb 16 15:30:43 crc kubenswrapper[4705]: I0216 15:30:43.064658 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j"]
Feb 16 15:30:43 crc kubenswrapper[4705]: I0216 15:30:43.614471 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" event={"ID":"0b4f3354-7fb7-4031-9c17-270d82f9ece1","Type":"ContainerStarted","Data":"3248cb9fde276a55af37987d67a39cc404620b3d9acb9b5859deab0a32d27f89"}
Feb 16 15:30:44 crc kubenswrapper[4705]: I0216 15:30:44.630084 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" event={"ID":"0b4f3354-7fb7-4031-9c17-270d82f9ece1","Type":"ContainerStarted","Data":"1ff3584c7989d92952bba73c1070e5f2b6b7dabc78a615853d31c0087a4a94ae"}
Feb 16 15:30:44 crc kubenswrapper[4705]: I0216 15:30:44.665910 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" podStartSLOduration=2.168536301 podStartE2EDuration="2.665888133s" podCreationTimestamp="2026-02-16 15:30:42 +0000 UTC" firstStartedPulling="2026-02-16 15:30:43.079723391 +0000 UTC m=+2237.264700467" lastFinishedPulling="2026-02-16 15:30:43.577075223 +0000 UTC m=+2237.762052299" observedRunningTime="2026-02-16 15:30:44.653342409 +0000 UTC m=+2238.838319485" watchObservedRunningTime="2026-02-16 15:30:44.665888133 +0000 UTC m=+2238.850865209"
Feb 16 15:30:45 crc kubenswrapper[4705]: E0216 15:30:45.423529 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:30:52 crc kubenswrapper[4705]: E0216 15:30:52.423706 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:30:57 crc kubenswrapper[4705]: E0216 15:30:57.423555 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:31:01 crc kubenswrapper[4705]: I0216 15:31:01.684719 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:31:01 crc kubenswrapper[4705]: I0216 15:31:01.685454 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:31:04 crc kubenswrapper[4705]: E0216 15:31:04.423046 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:31:12 crc kubenswrapper[4705]: E0216 15:31:12.423353 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:31:15 crc kubenswrapper[4705]: E0216 15:31:15.423437 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:31:24 crc kubenswrapper[4705]: E0216 15:31:24.423840 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:31:27 crc kubenswrapper[4705]: E0216 15:31:27.424423 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:31:28 crc kubenswrapper[4705]: I0216 15:31:28.788690 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dwp"]
Feb 16 15:31:28 crc kubenswrapper[4705]: I0216 15:31:28.793128 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:28 crc kubenswrapper[4705]: I0216 15:31:28.801670 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dwp"]
Feb 16 15:31:28 crc kubenswrapper[4705]: I0216 15:31:28.926688 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-utilities\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:28 crc kubenswrapper[4705]: I0216 15:31:28.926739 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-catalog-content\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:28 crc kubenswrapper[4705]: I0216 15:31:28.926924 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxgx2\" (UniqueName: \"kubernetes.io/projected/65b54e01-a38c-4506-ae81-64e233cb63d8-kube-api-access-lxgx2\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.030882 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-utilities\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.031363 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-catalog-content\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.031722 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-catalog-content\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.031494 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-utilities\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.031743 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxgx2\" (UniqueName: \"kubernetes.io/projected/65b54e01-a38c-4506-ae81-64e233cb63d8-kube-api-access-lxgx2\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.064700 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxgx2\" (UniqueName: \"kubernetes.io/projected/65b54e01-a38c-4506-ae81-64e233cb63d8-kube-api-access-lxgx2\") pod \"redhat-marketplace-z9dwp\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") " pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.138691 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:29 crc kubenswrapper[4705]: I0216 15:31:29.745597 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dwp"]
Feb 16 15:31:30 crc kubenswrapper[4705]: I0216 15:31:30.437200 4705 generic.go:334] "Generic (PLEG): container finished" podID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerID="3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69" exitCode=0
Feb 16 15:31:30 crc kubenswrapper[4705]: I0216 15:31:30.437253 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerDied","Data":"3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69"}
Feb 16 15:31:30 crc kubenswrapper[4705]: I0216 15:31:30.437707 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerStarted","Data":"4bf4106a7d3133a69edfc0af3627e17b7f3a8e4a9a69e05595b74dffae5ac445"}
Feb 16 15:31:31 crc kubenswrapper[4705]: I0216 15:31:31.451934 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerStarted","Data":"c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588"}
Feb 16 15:31:31 crc kubenswrapper[4705]: I0216 15:31:31.685597 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:31:31 crc kubenswrapper[4705]: I0216 15:31:31.686096 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:31:31 crc kubenswrapper[4705]: I0216 15:31:31.686385 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4"
Feb 16 15:31:31 crc kubenswrapper[4705]: I0216 15:31:31.687565 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 15:31:31 crc kubenswrapper[4705]: I0216 15:31:31.687706 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" gracePeriod=600
Feb 16 15:31:31 crc kubenswrapper[4705]: E0216 15:31:31.842061 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:31:32 crc kubenswrapper[4705]: I0216 15:31:32.477186 4705 generic.go:334] "Generic (PLEG): container finished" podID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerID="c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588" exitCode=0
Feb 16 15:31:32 crc kubenswrapper[4705]: I0216 15:31:32.477307 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerDied","Data":"c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588"}
Feb 16 15:31:32 crc kubenswrapper[4705]: I0216 15:31:32.486936 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" exitCode=0
Feb 16 15:31:32 crc kubenswrapper[4705]: I0216 15:31:32.487006 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b"}
Feb 16 15:31:32 crc kubenswrapper[4705]: I0216 15:31:32.487043 4705 scope.go:117] "RemoveContainer" containerID="7f4db11e79090b84e2ad677e629027370d9c3ded7d98a18a3a8340dd55dee54a"
Feb 16 15:31:32 crc kubenswrapper[4705]: I0216 15:31:32.491993 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b"
Feb 16 15:31:32 crc kubenswrapper[4705]: E0216 15:31:32.494937 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:31:33 crc kubenswrapper[4705]: I0216 15:31:33.502546 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerStarted","Data":"081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3"}
Feb 16 15:31:33 crc kubenswrapper[4705]: I0216 15:31:33.527749 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z9dwp" podStartSLOduration=3.058329383 podStartE2EDuration="5.527718064s" podCreationTimestamp="2026-02-16 15:31:28 +0000 UTC" firstStartedPulling="2026-02-16 15:31:30.441598804 +0000 UTC m=+2284.626575880" lastFinishedPulling="2026-02-16 15:31:32.910987465 +0000 UTC m=+2287.095964561" observedRunningTime="2026-02-16 15:31:33.526383866 +0000 UTC m=+2287.711360942" watchObservedRunningTime="2026-02-16 15:31:33.527718064 +0000 UTC m=+2287.712695140"
Feb 16 15:31:35 crc kubenswrapper[4705]: E0216 15:31:35.422848 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:31:39 crc kubenswrapper[4705]: I0216 15:31:39.139226 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:39 crc kubenswrapper[4705]: I0216 15:31:39.140906 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:39 crc kubenswrapper[4705]: I0216 15:31:39.226303 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:39 crc kubenswrapper[4705]: I0216 15:31:39.639254 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:39 crc kubenswrapper[4705]: I0216 15:31:39.707623 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dwp"]
Feb 16 15:31:41 crc kubenswrapper[4705]: E0216 15:31:41.424065 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:31:41 crc kubenswrapper[4705]: I0216 15:31:41.590137 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z9dwp" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="registry-server" containerID="cri-o://081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3" gracePeriod=2
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.188654 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.378505 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-utilities\") pod \"65b54e01-a38c-4506-ae81-64e233cb63d8\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") "
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.378990 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-catalog-content\") pod \"65b54e01-a38c-4506-ae81-64e233cb63d8\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") "
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.379086 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxgx2\" (UniqueName: \"kubernetes.io/projected/65b54e01-a38c-4506-ae81-64e233cb63d8-kube-api-access-lxgx2\") pod \"65b54e01-a38c-4506-ae81-64e233cb63d8\" (UID: \"65b54e01-a38c-4506-ae81-64e233cb63d8\") "
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.380564 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-utilities" (OuterVolumeSpecName: "utilities") pod "65b54e01-a38c-4506-ae81-64e233cb63d8" (UID: "65b54e01-a38c-4506-ae81-64e233cb63d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.389078 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b54e01-a38c-4506-ae81-64e233cb63d8-kube-api-access-lxgx2" (OuterVolumeSpecName: "kube-api-access-lxgx2") pod "65b54e01-a38c-4506-ae81-64e233cb63d8" (UID: "65b54e01-a38c-4506-ae81-64e233cb63d8"). InnerVolumeSpecName "kube-api-access-lxgx2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.416078 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65b54e01-a38c-4506-ae81-64e233cb63d8" (UID: "65b54e01-a38c-4506-ae81-64e233cb63d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.486856 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.486904 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b54e01-a38c-4506-ae81-64e233cb63d8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.486921 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxgx2\" (UniqueName: \"kubernetes.io/projected/65b54e01-a38c-4506-ae81-64e233cb63d8-kube-api-access-lxgx2\") on node \"crc\" DevicePath \"\""
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.610635 4705 generic.go:334] "Generic (PLEG): container finished" podID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerID="081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3" exitCode=0
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.610686 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerDied","Data":"081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3"}
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.610723 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dwp" event={"ID":"65b54e01-a38c-4506-ae81-64e233cb63d8","Type":"ContainerDied","Data":"4bf4106a7d3133a69edfc0af3627e17b7f3a8e4a9a69e05595b74dffae5ac445"}
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.610749 4705 scope.go:117] "RemoveContainer" containerID="081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3"
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.610913 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9dwp"
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.647774 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dwp"]
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.678485 4705 scope.go:117] "RemoveContainer" containerID="c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588"
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.681878 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dwp"]
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.715015 4705 scope.go:117] "RemoveContainer" containerID="3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69"
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.769881 4705 scope.go:117] "RemoveContainer" containerID="081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3"
Feb 16 15:31:42 crc kubenswrapper[4705]: E0216 15:31:42.771267 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3\": container with ID starting with 081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3 not found: ID does not exist" containerID="081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3"
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.771298 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3"} err="failed to get container status \"081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3\": rpc error: code = NotFound desc = could not find container \"081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3\": container with ID starting with 081c428b88c986b139bbc0e92df0e14af803a7efc5eaf17dacdf50e8859daeb3 not found: ID does not exist"
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.771320 4705 scope.go:117] "RemoveContainer" containerID="c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588"
Feb 16 15:31:42 crc kubenswrapper[4705]: E0216 15:31:42.771634 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588\": container with ID starting with c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588 not found: ID does not exist" containerID="c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588"
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.771658 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588"} err="failed to get container status \"c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588\": rpc error: code = NotFound desc = could not find container \"c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588\": container with ID starting with c73bf44dfc7bf34e5cade3f4589c3e33da8cda76a9312e7e4dbf40ed03173588 not found: ID does not exist"
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.771671 4705 scope.go:117] "RemoveContainer" containerID="3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69"
Feb 16 15:31:42 crc kubenswrapper[4705]: E0216 15:31:42.771856 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69\": container with ID starting with 3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69 not found: ID does not exist" containerID="3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69"
Feb 16 15:31:42 crc kubenswrapper[4705]: I0216 15:31:42.771875 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69"} err="failed to get container status \"3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69\": rpc error: code = NotFound desc = could not find container \"3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69\": container with ID starting with 3da132ea499e7d7e76c760c8435ce2c892e0483f5fab228899c0ac36a4fbaa69 not found: ID does not exist"
Feb 16 15:31:44 crc kubenswrapper[4705]: I0216 15:31:44.444653 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" path="/var/lib/kubelet/pods/65b54e01-a38c-4506-ae81-64e233cb63d8/volumes"
Feb 16 15:31:45 crc kubenswrapper[4705]: I0216 15:31:45.437182 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b"
Feb 16 15:31:45 crc kubenswrapper[4705]: E0216 15:31:45.439177 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:31:48 crc kubenswrapper[4705]: E0216 15:31:48.425460 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:31:53 crc kubenswrapper[4705]: E0216 15:31:53.430666 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:31:59 crc kubenswrapper[4705]: I0216 15:31:59.421352 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b"
Feb 16 15:31:59 crc kubenswrapper[4705]: E0216 15:31:59.423314 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:32:03 crc kubenswrapper[4705]: E0216 15:32:03.423478 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:32:04 crc kubenswrapper[4705]: E0216 15:32:04.420707 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\"
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:32:12 crc kubenswrapper[4705]: I0216 15:32:12.420541 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:32:12 crc kubenswrapper[4705]: E0216 15:32:12.422015 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:32:18 crc kubenswrapper[4705]: E0216 15:32:18.423713 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:32:18 crc kubenswrapper[4705]: E0216 15:32:18.427685 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:32:23 crc kubenswrapper[4705]: I0216 15:32:23.420125 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:32:23 crc kubenswrapper[4705]: E0216 15:32:23.421022 4705 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:32:29 crc kubenswrapper[4705]: E0216 15:32:29.422816 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:32:29 crc kubenswrapper[4705]: E0216 15:32:29.423006 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:32:34 crc kubenswrapper[4705]: I0216 15:32:34.420110 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:32:34 crc kubenswrapper[4705]: E0216 15:32:34.421245 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:32:42 crc kubenswrapper[4705]: E0216 15:32:42.429077 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:32:44 crc kubenswrapper[4705]: E0216 15:32:44.423257 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:32:47 crc kubenswrapper[4705]: I0216 15:32:47.420267 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:32:47 crc kubenswrapper[4705]: E0216 15:32:47.420729 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:32:54 crc kubenswrapper[4705]: E0216 15:32:54.424632 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:32:57 crc kubenswrapper[4705]: E0216 15:32:57.424149 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:33:02 crc kubenswrapper[4705]: I0216 15:33:01.419824 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:33:02 crc kubenswrapper[4705]: E0216 15:33:01.420912 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:33:08 crc kubenswrapper[4705]: E0216 15:33:08.425279 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:33:10 crc kubenswrapper[4705]: E0216 15:33:10.422332 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:33:16 crc kubenswrapper[4705]: I0216 15:33:16.434498 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:33:16 crc kubenswrapper[4705]: E0216 15:33:16.435868 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:33:21 crc kubenswrapper[4705]: E0216 15:33:21.422534 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:33:22 crc kubenswrapper[4705]: E0216 15:33:22.421900 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:33:28 crc kubenswrapper[4705]: I0216 15:33:28.420023 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:33:28 crc kubenswrapper[4705]: E0216 15:33:28.421070 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:33:35 crc kubenswrapper[4705]: E0216 15:33:35.424577 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:33:36 crc kubenswrapper[4705]: E0216 15:33:36.437994 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:33:43 crc kubenswrapper[4705]: I0216 15:33:43.421326 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:33:43 crc kubenswrapper[4705]: E0216 15:33:43.422669 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:33:48 crc kubenswrapper[4705]: E0216 15:33:48.427640 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:33:49 crc kubenswrapper[4705]: E0216 15:33:49.422971 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:33:57 crc kubenswrapper[4705]: I0216 15:33:57.424319 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:33:57 crc kubenswrapper[4705]: E0216 15:33:57.426833 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:34:01 crc kubenswrapper[4705]: E0216 15:34:01.425111 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:34:01 crc kubenswrapper[4705]: E0216 15:34:01.425339 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:34:10 crc kubenswrapper[4705]: I0216 15:34:10.421155 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:34:10 crc kubenswrapper[4705]: E0216 15:34:10.422281 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:34:14 crc kubenswrapper[4705]: E0216 15:34:14.423806 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:34:16 crc kubenswrapper[4705]: E0216 15:34:16.437053 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:34:21 crc kubenswrapper[4705]: I0216 15:34:21.420037 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:34:21 crc kubenswrapper[4705]: E0216 15:34:21.420922 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:34:29 crc kubenswrapper[4705]: E0216 15:34:29.422842 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:34:29 crc kubenswrapper[4705]: E0216 15:34:29.422889 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:34:34 crc kubenswrapper[4705]: I0216 15:34:34.420671 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:34:34 crc kubenswrapper[4705]: E0216 15:34:34.421669 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:34:40 crc kubenswrapper[4705]: E0216 15:34:40.455016 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:34:41 crc kubenswrapper[4705]: E0216 15:34:41.421245 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:34:48 crc kubenswrapper[4705]: I0216 15:34:48.423302 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:34:48 crc kubenswrapper[4705]: E0216 15:34:48.424441 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:34:51 crc kubenswrapper[4705]: E0216 15:34:51.422826 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:34:56 crc kubenswrapper[4705]: E0216 15:34:56.423510 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:35:03 crc kubenswrapper[4705]: I0216 15:35:03.424898 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:35:03 crc kubenswrapper[4705]: E0216 15:35:03.425851 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:35:05 crc kubenswrapper[4705]: E0216 15:35:05.422915 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:35:09 crc kubenswrapper[4705]: E0216 15:35:09.422802 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:35:16 crc kubenswrapper[4705]: I0216 15:35:16.435897 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:35:16 crc kubenswrapper[4705]: E0216 15:35:16.437323 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:35:18 crc kubenswrapper[4705]: E0216 15:35:18.423261 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:35:20 crc kubenswrapper[4705]: I0216 15:35:20.423013 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:35:20 crc kubenswrapper[4705]: E0216 15:35:20.566477 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:35:20 crc kubenswrapper[4705]: E0216 15:35:20.566567 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:35:20 crc kubenswrapper[4705]: E0216 15:35:20.566733 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:35:20 crc kubenswrapper[4705]: E0216 15:35:20.567948 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:35:27 crc kubenswrapper[4705]: I0216 15:35:27.419788 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:35:27 crc kubenswrapper[4705]: E0216 15:35:27.421409 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:35:32 crc kubenswrapper[4705]: E0216 15:35:32.559955 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:35:32 crc kubenswrapper[4705]: E0216 15:35:32.560596 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:35:32 crc kubenswrapper[4705]: E0216 15:35:32.560822 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:35:32 crc kubenswrapper[4705]: E0216 15:35:32.561974 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:35:35 crc kubenswrapper[4705]: E0216 15:35:35.422488 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:35:42 crc kubenswrapper[4705]: I0216 15:35:42.421038 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:35:42 crc kubenswrapper[4705]: E0216 15:35:42.422495 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:35:45 crc kubenswrapper[4705]: E0216 15:35:45.424106 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:35:46 crc kubenswrapper[4705]: E0216 15:35:46.447508 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:35:57 
crc kubenswrapper[4705]: I0216 15:35:57.421350 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:35:57 crc kubenswrapper[4705]: E0216 15:35:57.422998 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:35:57 crc kubenswrapper[4705]: E0216 15:35:57.426665 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:36:00 crc kubenswrapper[4705]: E0216 15:36:00.436212 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:36:08 crc kubenswrapper[4705]: E0216 15:36:08.423862 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:36:11 crc kubenswrapper[4705]: I0216 15:36:11.419902 4705 scope.go:117] "RemoveContainer" 
containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:36:11 crc kubenswrapper[4705]: E0216 15:36:11.420775 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:36:15 crc kubenswrapper[4705]: E0216 15:36:15.424036 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:36:21 crc kubenswrapper[4705]: E0216 15:36:21.424549 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:36:24 crc kubenswrapper[4705]: I0216 15:36:24.419755 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:36:24 crc kubenswrapper[4705]: E0216 15:36:24.420421 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:36:30 crc kubenswrapper[4705]: E0216 15:36:30.424399 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:36:34 crc kubenswrapper[4705]: E0216 15:36:34.425167 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:36:36 crc kubenswrapper[4705]: I0216 15:36:36.432492 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:36:37 crc kubenswrapper[4705]: I0216 15:36:37.602851 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"8557e8f40d8373ef2bcef970e2d4c8225fcaa8f8f6cb555d192755351fbc25c6"} Feb 16 15:36:42 crc kubenswrapper[4705]: E0216 15:36:42.426555 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:36:47 crc kubenswrapper[4705]: E0216 15:36:47.426362 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:36:57 crc kubenswrapper[4705]: E0216 15:36:57.422981 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:36:58 crc kubenswrapper[4705]: E0216 15:36:58.424767 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:37:02 crc kubenswrapper[4705]: I0216 15:37:02.905429 4705 generic.go:334] "Generic (PLEG): container finished" podID="0b4f3354-7fb7-4031-9c17-270d82f9ece1" containerID="1ff3584c7989d92952bba73c1070e5f2b6b7dabc78a615853d31c0087a4a94ae" exitCode=2 Feb 16 15:37:02 crc kubenswrapper[4705]: I0216 15:37:02.905525 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" event={"ID":"0b4f3354-7fb7-4031-9c17-270d82f9ece1","Type":"ContainerDied","Data":"1ff3584c7989d92952bba73c1070e5f2b6b7dabc78a615853d31c0087a4a94ae"} Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.487074 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.559981 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4nb7\" (UniqueName: \"kubernetes.io/projected/0b4f3354-7fb7-4031-9c17-270d82f9ece1-kube-api-access-m4nb7\") pod \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.560817 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-inventory\") pod \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.561068 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-ssh-key-openstack-edpm-ipam\") pod \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\" (UID: \"0b4f3354-7fb7-4031-9c17-270d82f9ece1\") " Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.574828 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4f3354-7fb7-4031-9c17-270d82f9ece1-kube-api-access-m4nb7" (OuterVolumeSpecName: "kube-api-access-m4nb7") pod "0b4f3354-7fb7-4031-9c17-270d82f9ece1" (UID: "0b4f3354-7fb7-4031-9c17-270d82f9ece1"). InnerVolumeSpecName "kube-api-access-m4nb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.599292 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0b4f3354-7fb7-4031-9c17-270d82f9ece1" (UID: "0b4f3354-7fb7-4031-9c17-270d82f9ece1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.612720 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-inventory" (OuterVolumeSpecName: "inventory") pod "0b4f3354-7fb7-4031-9c17-270d82f9ece1" (UID: "0b4f3354-7fb7-4031-9c17-270d82f9ece1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.664419 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.664696 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b4f3354-7fb7-4031-9c17-270d82f9ece1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.664769 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4nb7\" (UniqueName: \"kubernetes.io/projected/0b4f3354-7fb7-4031-9c17-270d82f9ece1-kube-api-access-m4nb7\") on node \"crc\" DevicePath \"\"" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.926015 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" 
event={"ID":"0b4f3354-7fb7-4031-9c17-270d82f9ece1","Type":"ContainerDied","Data":"3248cb9fde276a55af37987d67a39cc404620b3d9acb9b5859deab0a32d27f89"} Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.926483 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3248cb9fde276a55af37987d67a39cc404620b3d9acb9b5859deab0a32d27f89" Feb 16 15:37:04 crc kubenswrapper[4705]: I0216 15:37:04.926073 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j" Feb 16 15:37:11 crc kubenswrapper[4705]: E0216 15:37:11.422816 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:37:12 crc kubenswrapper[4705]: E0216 15:37:12.421757 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.043119 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx"] Feb 16 15:37:22 crc kubenswrapper[4705]: E0216 15:37:22.044929 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4f3354-7fb7-4031-9c17-270d82f9ece1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.044952 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4f3354-7fb7-4031-9c17-270d82f9ece1" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:37:22 crc kubenswrapper[4705]: E0216 15:37:22.044966 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="extract-utilities" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.044974 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="extract-utilities" Feb 16 15:37:22 crc kubenswrapper[4705]: E0216 15:37:22.044997 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="registry-server" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.045005 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="registry-server" Feb 16 15:37:22 crc kubenswrapper[4705]: E0216 15:37:22.045022 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="extract-content" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.045029 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="extract-content" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.045386 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b54e01-a38c-4506-ae81-64e233cb63d8" containerName="registry-server" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.045419 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b4f3354-7fb7-4031-9c17-270d82f9ece1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.046702 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.052783 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.052835 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.052779 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.059017 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx"] Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.071997 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.182292 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.182424 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 
15:37:22.182469 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx4k2\" (UniqueName: \"kubernetes.io/projected/5c695fba-8bed-4549-98f9-b708893eab8e-kube-api-access-cx4k2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.285579 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.285657 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.285684 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4k2\" (UniqueName: \"kubernetes.io/projected/5c695fba-8bed-4549-98f9-b708893eab8e-kube-api-access-cx4k2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.293002 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.305148 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.307896 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx4k2\" (UniqueName: \"kubernetes.io/projected/5c695fba-8bed-4549-98f9-b708893eab8e-kube-api-access-cx4k2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:22 crc kubenswrapper[4705]: I0216 15:37:22.385751 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:37:23 crc kubenswrapper[4705]: I0216 15:37:23.109986 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx"] Feb 16 15:37:23 crc kubenswrapper[4705]: I0216 15:37:23.181751 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" event={"ID":"5c695fba-8bed-4549-98f9-b708893eab8e","Type":"ContainerStarted","Data":"c431d84f3d2588c6cedef387fab4e7ebeb4c121e39cfb3ea48ace1861434f615"} Feb 16 15:37:24 crc kubenswrapper[4705]: I0216 15:37:24.198462 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" event={"ID":"5c695fba-8bed-4549-98f9-b708893eab8e","Type":"ContainerStarted","Data":"339d2e080c59916666037b9af2a07a18342b8dd23aa94129299a7fe3384903ac"} Feb 16 15:37:24 crc kubenswrapper[4705]: I0216 15:37:24.236014 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" podStartSLOduration=1.7886381180000002 podStartE2EDuration="2.235987229s" podCreationTimestamp="2026-02-16 15:37:22 +0000 UTC" firstStartedPulling="2026-02-16 15:37:23.113938534 +0000 UTC m=+2637.298915600" lastFinishedPulling="2026-02-16 15:37:23.561287635 +0000 UTC m=+2637.746264711" observedRunningTime="2026-02-16 15:37:24.222747315 +0000 UTC m=+2638.407724421" watchObservedRunningTime="2026-02-16 15:37:24.235987229 +0000 UTC m=+2638.420964315" Feb 16 15:37:26 crc kubenswrapper[4705]: E0216 15:37:26.428892 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:37:27 crc kubenswrapper[4705]: E0216 15:37:27.422160 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:37:38 crc kubenswrapper[4705]: E0216 15:37:38.423740 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:37:42 crc kubenswrapper[4705]: E0216 15:37:42.422493 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:37:53 crc kubenswrapper[4705]: E0216 15:37:53.424667 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:37:55 crc kubenswrapper[4705]: E0216 15:37:55.422580 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:38:06 crc kubenswrapper[4705]: E0216 15:38:06.431214 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:38:06 crc kubenswrapper[4705]: E0216 15:38:06.431345 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:38:17 crc kubenswrapper[4705]: E0216 15:38:17.421990 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:38:17 crc kubenswrapper[4705]: E0216 15:38:17.422066 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:38:29 crc kubenswrapper[4705]: E0216 15:38:29.422545 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:38:31 crc kubenswrapper[4705]: E0216 15:38:31.421660 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:38:42 crc kubenswrapper[4705]: E0216 15:38:42.422593 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:38:44 crc kubenswrapper[4705]: E0216 15:38:44.421268 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:38:53 crc kubenswrapper[4705]: E0216 15:38:53.423987 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:38:56 crc kubenswrapper[4705]: E0216 15:38:56.428678 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:39:01 crc kubenswrapper[4705]: I0216 15:39:01.685717 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:39:01 crc kubenswrapper[4705]: I0216 15:39:01.686726 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:39:08 crc kubenswrapper[4705]: E0216 15:39:08.423742 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:39:08 crc kubenswrapper[4705]: E0216 15:39:08.423892 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:39:20 crc kubenswrapper[4705]: E0216 15:39:20.422305 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:39:21 crc kubenswrapper[4705]: E0216 15:39:21.422979 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:39:31 crc kubenswrapper[4705]: I0216 15:39:31.684340 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:39:31 crc kubenswrapper[4705]: I0216 15:39:31.685415 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:39:32 crc kubenswrapper[4705]: E0216 15:39:32.422928 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:39:32 crc kubenswrapper[4705]: E0216 15:39:32.423548 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:39:45 crc kubenswrapper[4705]: E0216 15:39:45.424617 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:39:45 crc kubenswrapper[4705]: E0216 15:39:45.425701 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:39:56 crc kubenswrapper[4705]: E0216 15:39:56.431046 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:39:57 crc kubenswrapper[4705]: E0216 15:39:57.423894 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:40:01 crc kubenswrapper[4705]: I0216 15:40:01.683884 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:40:01 crc kubenswrapper[4705]: I0216 15:40:01.684559 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:40:01 crc kubenswrapper[4705]: I0216 15:40:01.684614 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:40:01 crc kubenswrapper[4705]: I0216 15:40:01.685693 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8557e8f40d8373ef2bcef970e2d4c8225fcaa8f8f6cb555d192755351fbc25c6"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:40:01 crc kubenswrapper[4705]: I0216 15:40:01.685760 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://8557e8f40d8373ef2bcef970e2d4c8225fcaa8f8f6cb555d192755351fbc25c6" gracePeriod=600 Feb 16 15:40:02 crc kubenswrapper[4705]: I0216 15:40:02.137349 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="8557e8f40d8373ef2bcef970e2d4c8225fcaa8f8f6cb555d192755351fbc25c6" exitCode=0 Feb 16 15:40:02 crc kubenswrapper[4705]: I0216 15:40:02.137410 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"8557e8f40d8373ef2bcef970e2d4c8225fcaa8f8f6cb555d192755351fbc25c6"} Feb 16 15:40:02 crc kubenswrapper[4705]: I0216 15:40:02.137873 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"} Feb 16 15:40:02 crc kubenswrapper[4705]: I0216 15:40:02.137902 4705 scope.go:117] "RemoveContainer" containerID="c2c76818e3861f03abca61b74046846474bf6847233719037e843472acd6482b" Feb 16 15:40:07 crc kubenswrapper[4705]: E0216 15:40:07.421873 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:40:08 crc kubenswrapper[4705]: E0216 15:40:08.432812 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:40:18 crc kubenswrapper[4705]: E0216 15:40:18.423577 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:40:22 crc kubenswrapper[4705]: E0216 15:40:22.425071 
4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.745209 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g4ngb"] Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.762651 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4ngb"] Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.762816 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.832990 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-catalog-content\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.833151 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtxzt\" (UniqueName: \"kubernetes.io/projected/f237e260-e672-4b6e-8c0d-1fea39f1724f-kube-api-access-dtxzt\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.833204 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-utilities\") pod 
\"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.943544 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-catalog-content\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.944128 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-catalog-content\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.947487 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtxzt\" (UniqueName: \"kubernetes.io/projected/f237e260-e672-4b6e-8c0d-1fea39f1724f-kube-api-access-dtxzt\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.948219 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-utilities\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.948909 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-utilities\") pod \"redhat-operators-g4ngb\" (UID: 
\"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:22 crc kubenswrapper[4705]: I0216 15:40:22.978460 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtxzt\" (UniqueName: \"kubernetes.io/projected/f237e260-e672-4b6e-8c0d-1fea39f1724f-kube-api-access-dtxzt\") pod \"redhat-operators-g4ngb\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:23 crc kubenswrapper[4705]: I0216 15:40:23.097763 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:24 crc kubenswrapper[4705]: I0216 15:40:23.671091 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4ngb"] Feb 16 15:40:24 crc kubenswrapper[4705]: I0216 15:40:24.473192 4705 generic.go:334] "Generic (PLEG): container finished" podID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerID="219468d8899b7955d3e9b9a231d29a968f0060c5e43d73eaf27c9242987b442e" exitCode=0 Feb 16 15:40:24 crc kubenswrapper[4705]: I0216 15:40:24.473629 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerDied","Data":"219468d8899b7955d3e9b9a231d29a968f0060c5e43d73eaf27c9242987b442e"} Feb 16 15:40:24 crc kubenswrapper[4705]: I0216 15:40:24.474188 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerStarted","Data":"eda671dae6a4b001a13bb9df0f6a3c3fc919f1941fb2808a4b7428464c673a61"} Feb 16 15:40:24 crc kubenswrapper[4705]: I0216 15:40:24.478097 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:40:26 crc kubenswrapper[4705]: I0216 15:40:26.511244 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerStarted","Data":"898149154c361e3a76fb9e90962b2c13a71ae4a13729f1aded7e5f1c72a1bcfd"} Feb 16 15:40:29 crc kubenswrapper[4705]: E0216 15:40:29.923940 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf237e260_e672_4b6e_8c0d_1fea39f1724f.slice/crio-898149154c361e3a76fb9e90962b2c13a71ae4a13729f1aded7e5f1c72a1bcfd.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:40:30 crc kubenswrapper[4705]: I0216 15:40:30.580747 4705 generic.go:334] "Generic (PLEG): container finished" podID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerID="898149154c361e3a76fb9e90962b2c13a71ae4a13729f1aded7e5f1c72a1bcfd" exitCode=0 Feb 16 15:40:30 crc kubenswrapper[4705]: I0216 15:40:30.580880 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerDied","Data":"898149154c361e3a76fb9e90962b2c13a71ae4a13729f1aded7e5f1c72a1bcfd"} Feb 16 15:40:31 crc kubenswrapper[4705]: I0216 15:40:31.600229 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerStarted","Data":"176f3bcc29eba171ebea3b9c928d5cced7ff4fa54694dadea216bf7a49216ba3"} Feb 16 15:40:31 crc kubenswrapper[4705]: I0216 15:40:31.644507 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g4ngb" podStartSLOduration=3.070855288 podStartE2EDuration="9.644467021s" podCreationTimestamp="2026-02-16 15:40:22 +0000 UTC" firstStartedPulling="2026-02-16 15:40:24.477777506 +0000 UTC m=+2818.662754592" lastFinishedPulling="2026-02-16 15:40:31.051389239 +0000 UTC 
m=+2825.236366325" observedRunningTime="2026-02-16 15:40:31.625209007 +0000 UTC m=+2825.810186103" watchObservedRunningTime="2026-02-16 15:40:31.644467021 +0000 UTC m=+2825.829444127" Feb 16 15:40:33 crc kubenswrapper[4705]: I0216 15:40:33.098742 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:33 crc kubenswrapper[4705]: I0216 15:40:33.098838 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:33 crc kubenswrapper[4705]: E0216 15:40:33.553227 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:40:33 crc kubenswrapper[4705]: E0216 15:40:33.553722 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:40:33 crc kubenswrapper[4705]: E0216 15:40:33.553862 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:40:33 crc kubenswrapper[4705]: E0216 15:40:33.555386 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:40:34 crc kubenswrapper[4705]: I0216 15:40:34.148971 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4ngb" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="registry-server" probeResult="failure" output=< Feb 16 15:40:34 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:40:34 crc kubenswrapper[4705]: > Feb 16 15:40:36 crc kubenswrapper[4705]: E0216 15:40:36.549613 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:40:36 crc kubenswrapper[4705]: E0216 15:40:36.550633 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:40:36 crc kubenswrapper[4705]: E0216 15:40:36.550838 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:40:36 crc kubenswrapper[4705]: E0216 15:40:36.552189 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.437171 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5x5h7"] Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.440239 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5x5h7"] Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.440341 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.566034 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-548bv\" (UniqueName: \"kubernetes.io/projected/39635490-f866-4108-9281-6105560b35a2-kube-api-access-548bv\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.566593 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-utilities\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.568661 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-catalog-content\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.670789 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-utilities\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.670925 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-catalog-content\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.671091 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-548bv\" (UniqueName: \"kubernetes.io/projected/39635490-f866-4108-9281-6105560b35a2-kube-api-access-548bv\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.671502 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-utilities\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.671741 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-catalog-content\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.694461 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-548bv\" (UniqueName: \"kubernetes.io/projected/39635490-f866-4108-9281-6105560b35a2-kube-api-access-548bv\") pod \"certified-operators-5x5h7\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:42 crc kubenswrapper[4705]: I0216 15:40:42.773129 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:43 crc kubenswrapper[4705]: I0216 15:40:43.204188 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:43 crc kubenswrapper[4705]: I0216 15:40:43.271443 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:43 crc kubenswrapper[4705]: I0216 15:40:43.400435 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5x5h7"] Feb 16 15:40:43 crc kubenswrapper[4705]: I0216 15:40:43.734335 4705 generic.go:334] "Generic (PLEG): container finished" podID="39635490-f866-4108-9281-6105560b35a2" containerID="2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb" exitCode=0 Feb 16 15:40:43 crc kubenswrapper[4705]: I0216 15:40:43.734423 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerDied","Data":"2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb"} Feb 16 15:40:43 crc kubenswrapper[4705]: I0216 15:40:43.734918 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerStarted","Data":"d54a0018ea82a8a39b4fd22b98aae1c3a3f867a3ad7bbd769da6bc2503e4a5b6"} Feb 16 15:40:45 crc kubenswrapper[4705]: I0216 15:40:45.581798 4705 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4ngb"] Feb 16 15:40:45 crc kubenswrapper[4705]: I0216 15:40:45.582760 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g4ngb" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="registry-server" containerID="cri-o://176f3bcc29eba171ebea3b9c928d5cced7ff4fa54694dadea216bf7a49216ba3" gracePeriod=2 Feb 16 15:40:45 crc kubenswrapper[4705]: I0216 15:40:45.760644 4705 generic.go:334] "Generic (PLEG): container finished" podID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerID="176f3bcc29eba171ebea3b9c928d5cced7ff4fa54694dadea216bf7a49216ba3" exitCode=0 Feb 16 15:40:45 crc kubenswrapper[4705]: I0216 15:40:45.760730 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerDied","Data":"176f3bcc29eba171ebea3b9c928d5cced7ff4fa54694dadea216bf7a49216ba3"} Feb 16 15:40:45 crc kubenswrapper[4705]: I0216 15:40:45.763590 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerStarted","Data":"81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077"} Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.161736 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.202490 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtxzt\" (UniqueName: \"kubernetes.io/projected/f237e260-e672-4b6e-8c0d-1fea39f1724f-kube-api-access-dtxzt\") pod \"f237e260-e672-4b6e-8c0d-1fea39f1724f\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.202624 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-catalog-content\") pod \"f237e260-e672-4b6e-8c0d-1fea39f1724f\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.202717 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-utilities\") pod \"f237e260-e672-4b6e-8c0d-1fea39f1724f\" (UID: \"f237e260-e672-4b6e-8c0d-1fea39f1724f\") " Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.204241 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-utilities" (OuterVolumeSpecName: "utilities") pod "f237e260-e672-4b6e-8c0d-1fea39f1724f" (UID: "f237e260-e672-4b6e-8c0d-1fea39f1724f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.212106 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f237e260-e672-4b6e-8c0d-1fea39f1724f-kube-api-access-dtxzt" (OuterVolumeSpecName: "kube-api-access-dtxzt") pod "f237e260-e672-4b6e-8c0d-1fea39f1724f" (UID: "f237e260-e672-4b6e-8c0d-1fea39f1724f"). InnerVolumeSpecName "kube-api-access-dtxzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.306646 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtxzt\" (UniqueName: \"kubernetes.io/projected/f237e260-e672-4b6e-8c0d-1fea39f1724f-kube-api-access-dtxzt\") on node \"crc\" DevicePath \"\"" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.306695 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.329998 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f237e260-e672-4b6e-8c0d-1fea39f1724f" (UID: "f237e260-e672-4b6e-8c0d-1fea39f1724f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.409449 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f237e260-e672-4b6e-8c0d-1fea39f1724f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:40:46 crc kubenswrapper[4705]: E0216 15:40:46.422324 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.778585 4705 generic.go:334] "Generic (PLEG): container finished" podID="39635490-f866-4108-9281-6105560b35a2" containerID="81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077" exitCode=0 Feb 16 15:40:46 crc 
kubenswrapper[4705]: I0216 15:40:46.778773 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerDied","Data":"81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077"} Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.793705 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4ngb" event={"ID":"f237e260-e672-4b6e-8c0d-1fea39f1724f","Type":"ContainerDied","Data":"eda671dae6a4b001a13bb9df0f6a3c3fc919f1941fb2808a4b7428464c673a61"} Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.793772 4705 scope.go:117] "RemoveContainer" containerID="176f3bcc29eba171ebea3b9c928d5cced7ff4fa54694dadea216bf7a49216ba3" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.793808 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4ngb" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.840143 4705 scope.go:117] "RemoveContainer" containerID="898149154c361e3a76fb9e90962b2c13a71ae4a13729f1aded7e5f1c72a1bcfd" Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.842971 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4ngb"] Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.864763 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g4ngb"] Feb 16 15:40:46 crc kubenswrapper[4705]: I0216 15:40:46.869934 4705 scope.go:117] "RemoveContainer" containerID="219468d8899b7955d3e9b9a231d29a968f0060c5e43d73eaf27c9242987b442e" Feb 16 15:40:47 crc kubenswrapper[4705]: I0216 15:40:47.809887 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" 
event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerStarted","Data":"5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90"} Feb 16 15:40:47 crc kubenswrapper[4705]: I0216 15:40:47.841866 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5x5h7" podStartSLOduration=2.332956911 podStartE2EDuration="5.841838792s" podCreationTimestamp="2026-02-16 15:40:42 +0000 UTC" firstStartedPulling="2026-02-16 15:40:43.738222532 +0000 UTC m=+2837.923199618" lastFinishedPulling="2026-02-16 15:40:47.247104413 +0000 UTC m=+2841.432081499" observedRunningTime="2026-02-16 15:40:47.829321759 +0000 UTC m=+2842.014298835" watchObservedRunningTime="2026-02-16 15:40:47.841838792 +0000 UTC m=+2842.026815868" Feb 16 15:40:48 crc kubenswrapper[4705]: I0216 15:40:48.434198 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" path="/var/lib/kubelet/pods/f237e260-e672-4b6e-8c0d-1fea39f1724f/volumes" Feb 16 15:40:51 crc kubenswrapper[4705]: E0216 15:40:51.423906 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:40:52 crc kubenswrapper[4705]: I0216 15:40:52.774843 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:52 crc kubenswrapper[4705]: I0216 15:40:52.775643 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:52 crc kubenswrapper[4705]: I0216 15:40:52.852819 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:52 crc kubenswrapper[4705]: I0216 15:40:52.943521 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:53 crc kubenswrapper[4705]: I0216 15:40:53.995333 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5x5h7"] Feb 16 15:40:54 crc kubenswrapper[4705]: I0216 15:40:54.900183 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5x5h7" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="registry-server" containerID="cri-o://5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90" gracePeriod=2 Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.469298 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.585891 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-548bv\" (UniqueName: \"kubernetes.io/projected/39635490-f866-4108-9281-6105560b35a2-kube-api-access-548bv\") pod \"39635490-f866-4108-9281-6105560b35a2\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.585976 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-catalog-content\") pod \"39635490-f866-4108-9281-6105560b35a2\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.586095 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-utilities\") pod 
\"39635490-f866-4108-9281-6105560b35a2\" (UID: \"39635490-f866-4108-9281-6105560b35a2\") " Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.587056 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-utilities" (OuterVolumeSpecName: "utilities") pod "39635490-f866-4108-9281-6105560b35a2" (UID: "39635490-f866-4108-9281-6105560b35a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.587831 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.592646 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39635490-f866-4108-9281-6105560b35a2-kube-api-access-548bv" (OuterVolumeSpecName: "kube-api-access-548bv") pod "39635490-f866-4108-9281-6105560b35a2" (UID: "39635490-f866-4108-9281-6105560b35a2"). InnerVolumeSpecName "kube-api-access-548bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.662943 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39635490-f866-4108-9281-6105560b35a2" (UID: "39635490-f866-4108-9281-6105560b35a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.690801 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39635490-f866-4108-9281-6105560b35a2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.690883 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-548bv\" (UniqueName: \"kubernetes.io/projected/39635490-f866-4108-9281-6105560b35a2-kube-api-access-548bv\") on node \"crc\" DevicePath \"\"" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.912155 4705 generic.go:334] "Generic (PLEG): container finished" podID="39635490-f866-4108-9281-6105560b35a2" containerID="5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90" exitCode=0 Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.912211 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerDied","Data":"5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90"} Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.912297 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x5h7" event={"ID":"39635490-f866-4108-9281-6105560b35a2","Type":"ContainerDied","Data":"d54a0018ea82a8a39b4fd22b98aae1c3a3f867a3ad7bbd769da6bc2503e4a5b6"} Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.912322 4705 scope.go:117] "RemoveContainer" containerID="5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.912316 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5x5h7" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.937816 4705 scope.go:117] "RemoveContainer" containerID="81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077" Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.977751 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5x5h7"] Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.977812 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5x5h7"] Feb 16 15:40:55 crc kubenswrapper[4705]: I0216 15:40:55.985325 4705 scope.go:117] "RemoveContainer" containerID="2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.039491 4705 scope.go:117] "RemoveContainer" containerID="5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90" Feb 16 15:40:56 crc kubenswrapper[4705]: E0216 15:40:56.040286 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90\": container with ID starting with 5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90 not found: ID does not exist" containerID="5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.040358 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90"} err="failed to get container status \"5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90\": rpc error: code = NotFound desc = could not find container \"5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90\": container with ID starting with 5d54959f43ed825129b69370a4b4403d50c3a08cf1c04388a12ea5f47ae6fb90 not 
found: ID does not exist" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.040442 4705 scope.go:117] "RemoveContainer" containerID="81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077" Feb 16 15:40:56 crc kubenswrapper[4705]: E0216 15:40:56.041195 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077\": container with ID starting with 81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077 not found: ID does not exist" containerID="81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.041235 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077"} err="failed to get container status \"81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077\": rpc error: code = NotFound desc = could not find container \"81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077\": container with ID starting with 81c2544de405fbe2ad811883be92ce1bdcd309e77b406bd384e58be747cbf077 not found: ID does not exist" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.041263 4705 scope.go:117] "RemoveContainer" containerID="2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb" Feb 16 15:40:56 crc kubenswrapper[4705]: E0216 15:40:56.041775 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb\": container with ID starting with 2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb not found: ID does not exist" containerID="2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.041898 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb"} err="failed to get container status \"2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb\": rpc error: code = NotFound desc = could not find container \"2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb\": container with ID starting with 2141e06065c9638d7d10768b35106f406e3222c908d8e60dd34b12114cd841cb not found: ID does not exist" Feb 16 15:40:56 crc kubenswrapper[4705]: I0216 15:40:56.442391 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39635490-f866-4108-9281-6105560b35a2" path="/var/lib/kubelet/pods/39635490-f866-4108-9281-6105560b35a2/volumes" Feb 16 15:40:59 crc kubenswrapper[4705]: E0216 15:40:59.423515 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:41:06 crc kubenswrapper[4705]: E0216 15:41:06.432276 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:41:11 crc kubenswrapper[4705]: E0216 15:41:11.422406 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:41:21 crc 
kubenswrapper[4705]: E0216 15:41:21.424071 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:41:25 crc kubenswrapper[4705]: E0216 15:41:25.422750 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.340516 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7ffz8"] Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.342160 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="registry-server" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.342179 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="registry-server" Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.342201 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="extract-content" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.342210 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="extract-content" Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.342231 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="extract-content" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 
15:41:36.342240 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="extract-content" Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.342265 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="extract-utilities" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.342274 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="extract-utilities" Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.342294 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="extract-utilities" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.342304 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="extract-utilities" Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.342333 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="registry-server" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.342341 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="registry-server" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.343756 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="39635490-f866-4108-9281-6105560b35a2" containerName="registry-server" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.343790 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f237e260-e672-4b6e-8c0d-1fea39f1724f" containerName="registry-server" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.347162 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.369938 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ffz8"] Feb 16 15:41:36 crc kubenswrapper[4705]: E0216 15:41:36.431704 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.478915 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-utilities\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.479140 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-catalog-content\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.479214 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmm2j\" (UniqueName: \"kubernetes.io/projected/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-kube-api-access-tmm2j\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.583177 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-utilities\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.583317 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-catalog-content\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.583340 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmm2j\" (UniqueName: \"kubernetes.io/projected/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-kube-api-access-tmm2j\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.584065 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-utilities\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.584604 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-catalog-content\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.615169 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tmm2j\" (UniqueName: \"kubernetes.io/projected/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-kube-api-access-tmm2j\") pod \"redhat-marketplace-7ffz8\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:36 crc kubenswrapper[4705]: I0216 15:41:36.687331 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:37 crc kubenswrapper[4705]: I0216 15:41:37.368131 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ffz8"] Feb 16 15:41:37 crc kubenswrapper[4705]: I0216 15:41:37.613604 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerStarted","Data":"31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936"} Feb 16 15:41:37 crc kubenswrapper[4705]: I0216 15:41:37.613682 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerStarted","Data":"9c3f94922ab40aed56fdd237b60b4af28ecc566a8e21d1d0b407ff4b18711778"} Feb 16 15:41:38 crc kubenswrapper[4705]: I0216 15:41:38.629394 4705 generic.go:334] "Generic (PLEG): container finished" podID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerID="31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936" exitCode=0 Feb 16 15:41:38 crc kubenswrapper[4705]: I0216 15:41:38.629463 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerDied","Data":"31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936"} Feb 16 15:41:39 crc kubenswrapper[4705]: E0216 15:41:39.421791 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:41:39 crc kubenswrapper[4705]: I0216 15:41:39.641762 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerStarted","Data":"7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8"} Feb 16 15:41:40 crc kubenswrapper[4705]: I0216 15:41:40.657520 4705 generic.go:334] "Generic (PLEG): container finished" podID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerID="7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8" exitCode=0 Feb 16 15:41:40 crc kubenswrapper[4705]: I0216 15:41:40.657615 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerDied","Data":"7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8"} Feb 16 15:41:41 crc kubenswrapper[4705]: I0216 15:41:41.676707 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerStarted","Data":"42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a"} Feb 16 15:41:46 crc kubenswrapper[4705]: I0216 15:41:46.689010 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:46 crc kubenswrapper[4705]: I0216 15:41:46.689740 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:46 crc kubenswrapper[4705]: I0216 15:41:46.785441 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:46 crc kubenswrapper[4705]: I0216 15:41:46.817685 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7ffz8" podStartSLOduration=8.364069997 podStartE2EDuration="10.817658438s" podCreationTimestamp="2026-02-16 15:41:36 +0000 UTC" firstStartedPulling="2026-02-16 15:41:38.63303931 +0000 UTC m=+2892.818016386" lastFinishedPulling="2026-02-16 15:41:41.086627751 +0000 UTC m=+2895.271604827" observedRunningTime="2026-02-16 15:41:41.717826597 +0000 UTC m=+2895.902803703" watchObservedRunningTime="2026-02-16 15:41:46.817658438 +0000 UTC m=+2901.002635524" Feb 16 15:41:46 crc kubenswrapper[4705]: I0216 15:41:46.849227 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:47 crc kubenswrapper[4705]: I0216 15:41:47.039830 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ffz8"] Feb 16 15:41:47 crc kubenswrapper[4705]: E0216 15:41:47.423645 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:41:48 crc kubenswrapper[4705]: I0216 15:41:48.748710 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7ffz8" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="registry-server" containerID="cri-o://42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a" gracePeriod=2 Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.389263 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.435132 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-utilities\") pod \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.435236 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-catalog-content\") pod \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.435390 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmm2j\" (UniqueName: \"kubernetes.io/projected/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-kube-api-access-tmm2j\") pod \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\" (UID: \"38f0818c-3ed8-45c0-825d-90cbd55d5fb0\") " Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.449171 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-kube-api-access-tmm2j" (OuterVolumeSpecName: "kube-api-access-tmm2j") pod "38f0818c-3ed8-45c0-825d-90cbd55d5fb0" (UID: "38f0818c-3ed8-45c0-825d-90cbd55d5fb0"). InnerVolumeSpecName "kube-api-access-tmm2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.465628 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-utilities" (OuterVolumeSpecName: "utilities") pod "38f0818c-3ed8-45c0-825d-90cbd55d5fb0" (UID: "38f0818c-3ed8-45c0-825d-90cbd55d5fb0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.496294 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38f0818c-3ed8-45c0-825d-90cbd55d5fb0" (UID: "38f0818c-3ed8-45c0-825d-90cbd55d5fb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.539787 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.539834 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.539847 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmm2j\" (UniqueName: \"kubernetes.io/projected/38f0818c-3ed8-45c0-825d-90cbd55d5fb0-kube-api-access-tmm2j\") on node \"crc\" DevicePath \"\"" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.762284 4705 generic.go:334] "Generic (PLEG): container finished" podID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerID="42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a" exitCode=0 Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.762342 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerDied","Data":"42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a"} Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.762399 4705 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-7ffz8" event={"ID":"38f0818c-3ed8-45c0-825d-90cbd55d5fb0","Type":"ContainerDied","Data":"9c3f94922ab40aed56fdd237b60b4af28ecc566a8e21d1d0b407ff4b18711778"} Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.762419 4705 scope.go:117] "RemoveContainer" containerID="42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.762434 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ffz8" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.796299 4705 scope.go:117] "RemoveContainer" containerID="7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.816398 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ffz8"] Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.828199 4705 scope.go:117] "RemoveContainer" containerID="31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.829485 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ffz8"] Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.886326 4705 scope.go:117] "RemoveContainer" containerID="42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a" Feb 16 15:41:49 crc kubenswrapper[4705]: E0216 15:41:49.886767 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a\": container with ID starting with 42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a not found: ID does not exist" containerID="42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.886811 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a"} err="failed to get container status \"42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a\": rpc error: code = NotFound desc = could not find container \"42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a\": container with ID starting with 42d2ed165b937ebee4d819b3ab56cc48cb96d3a67807d5fc8d7c54af3415958a not found: ID does not exist" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.886839 4705 scope.go:117] "RemoveContainer" containerID="7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8" Feb 16 15:41:49 crc kubenswrapper[4705]: E0216 15:41:49.887094 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8\": container with ID starting with 7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8 not found: ID does not exist" containerID="7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.887117 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8"} err="failed to get container status \"7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8\": rpc error: code = NotFound desc = could not find container \"7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8\": container with ID starting with 7e9a03b6386ece499ab08539262e26c1cb233c0c8aad4ec6416bb1f6013baff8 not found: ID does not exist" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.887129 4705 scope.go:117] "RemoveContainer" containerID="31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936" Feb 16 15:41:49 crc kubenswrapper[4705]: E0216 
15:41:49.887348 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936\": container with ID starting with 31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936 not found: ID does not exist" containerID="31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936" Feb 16 15:41:49 crc kubenswrapper[4705]: I0216 15:41:49.887402 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936"} err="failed to get container status \"31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936\": rpc error: code = NotFound desc = could not find container \"31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936\": container with ID starting with 31aee74eff6844c324b33fa045b5851e09a2571d808483e6e1f1cc95216ba936 not found: ID does not exist" Feb 16 15:41:50 crc kubenswrapper[4705]: E0216 15:41:50.421440 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:41:50 crc kubenswrapper[4705]: I0216 15:41:50.431947 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" path="/var/lib/kubelet/pods/38f0818c-3ed8-45c0-825d-90cbd55d5fb0/volumes" Feb 16 15:41:58 crc kubenswrapper[4705]: E0216 15:41:58.426723 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:42:02 crc kubenswrapper[4705]: E0216 15:42:02.422225 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:42:10 crc kubenswrapper[4705]: E0216 15:42:10.423615 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:42:13 crc kubenswrapper[4705]: E0216 15:42:13.421245 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:42:25 crc kubenswrapper[4705]: E0216 15:42:25.421497 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:42:28 crc kubenswrapper[4705]: E0216 15:42:28.422528 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:42:31 crc kubenswrapper[4705]: I0216 15:42:31.685050 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:42:31 crc kubenswrapper[4705]: I0216 15:42:31.686092 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:42:39 crc kubenswrapper[4705]: E0216 15:42:39.423832 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:42:40 crc kubenswrapper[4705]: E0216 15:42:40.423111 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:42:50 crc kubenswrapper[4705]: E0216 15:42:50.427942 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:42:55 crc kubenswrapper[4705]: E0216 15:42:55.422314 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:43:01 crc kubenswrapper[4705]: I0216 15:43:01.686644 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:43:01 crc kubenswrapper[4705]: I0216 15:43:01.687145 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:43:03 crc kubenswrapper[4705]: E0216 15:43:03.421789 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:43:09 crc kubenswrapper[4705]: E0216 15:43:09.424247 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 
16 15:43:14 crc kubenswrapper[4705]: E0216 15:43:14.423720 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:43:23 crc kubenswrapper[4705]: E0216 15:43:23.423954 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:43:29 crc kubenswrapper[4705]: E0216 15:43:29.422022 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:43:31 crc kubenswrapper[4705]: I0216 15:43:31.684398 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:43:31 crc kubenswrapper[4705]: I0216 15:43:31.685054 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:43:31 crc kubenswrapper[4705]: 
I0216 15:43:31.685146 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:43:31 crc kubenswrapper[4705]: I0216 15:43:31.686859 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:43:31 crc kubenswrapper[4705]: I0216 15:43:31.686992 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" gracePeriod=600 Feb 16 15:43:31 crc kubenswrapper[4705]: E0216 15:43:31.827851 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:43:32 crc kubenswrapper[4705]: I0216 15:43:32.076161 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" exitCode=0 Feb 16 15:43:32 crc kubenswrapper[4705]: I0216 15:43:32.076226 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24"} Feb 16 15:43:32 crc kubenswrapper[4705]: I0216 15:43:32.076282 4705 scope.go:117] "RemoveContainer" containerID="8557e8f40d8373ef2bcef970e2d4c8225fcaa8f8f6cb555d192755351fbc25c6" Feb 16 15:43:32 crc kubenswrapper[4705]: I0216 15:43:32.077351 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:43:32 crc kubenswrapper[4705]: E0216 15:43:32.077890 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:43:38 crc kubenswrapper[4705]: E0216 15:43:38.458244 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:43:39 crc kubenswrapper[4705]: I0216 15:43:39.196021 4705 generic.go:334] "Generic (PLEG): container finished" podID="5c695fba-8bed-4549-98f9-b708893eab8e" containerID="339d2e080c59916666037b9af2a07a18342b8dd23aa94129299a7fe3384903ac" exitCode=2 Feb 16 15:43:39 crc kubenswrapper[4705]: I0216 15:43:39.196112 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" event={"ID":"5c695fba-8bed-4549-98f9-b708893eab8e","Type":"ContainerDied","Data":"339d2e080c59916666037b9af2a07a18342b8dd23aa94129299a7fe3384903ac"} Feb 16 
15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.775037 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.836509 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-ssh-key-openstack-edpm-ipam\") pod \"5c695fba-8bed-4549-98f9-b708893eab8e\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.836945 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx4k2\" (UniqueName: \"kubernetes.io/projected/5c695fba-8bed-4549-98f9-b708893eab8e-kube-api-access-cx4k2\") pod \"5c695fba-8bed-4549-98f9-b708893eab8e\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.837069 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-inventory\") pod \"5c695fba-8bed-4549-98f9-b708893eab8e\" (UID: \"5c695fba-8bed-4549-98f9-b708893eab8e\") " Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.861795 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c695fba-8bed-4549-98f9-b708893eab8e-kube-api-access-cx4k2" (OuterVolumeSpecName: "kube-api-access-cx4k2") pod "5c695fba-8bed-4549-98f9-b708893eab8e" (UID: "5c695fba-8bed-4549-98f9-b708893eab8e"). InnerVolumeSpecName "kube-api-access-cx4k2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.893878 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-inventory" (OuterVolumeSpecName: "inventory") pod "5c695fba-8bed-4549-98f9-b708893eab8e" (UID: "5c695fba-8bed-4549-98f9-b708893eab8e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.899108 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5c695fba-8bed-4549-98f9-b708893eab8e" (UID: "5c695fba-8bed-4549-98f9-b708893eab8e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.940579 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx4k2\" (UniqueName: \"kubernetes.io/projected/5c695fba-8bed-4549-98f9-b708893eab8e-kube-api-access-cx4k2\") on node \"crc\" DevicePath \"\"" Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.940615 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:43:40 crc kubenswrapper[4705]: I0216 15:43:40.940625 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c695fba-8bed-4549-98f9-b708893eab8e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:43:41 crc kubenswrapper[4705]: I0216 15:43:41.232005 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" 
event={"ID":"5c695fba-8bed-4549-98f9-b708893eab8e","Type":"ContainerDied","Data":"c431d84f3d2588c6cedef387fab4e7ebeb4c121e39cfb3ea48ace1861434f615"} Feb 16 15:43:41 crc kubenswrapper[4705]: I0216 15:43:41.232617 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c431d84f3d2588c6cedef387fab4e7ebeb4c121e39cfb3ea48ace1861434f615" Feb 16 15:43:41 crc kubenswrapper[4705]: I0216 15:43:41.232230 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx" Feb 16 15:43:41 crc kubenswrapper[4705]: E0216 15:43:41.424138 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:43:45 crc kubenswrapper[4705]: I0216 15:43:45.420690 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:43:45 crc kubenswrapper[4705]: E0216 15:43:45.421528 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:43:52 crc kubenswrapper[4705]: E0216 15:43:52.422987 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:43:53 crc kubenswrapper[4705]: E0216 15:43:53.421649 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:44:00 crc kubenswrapper[4705]: I0216 15:44:00.424137 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:44:00 crc kubenswrapper[4705]: E0216 15:44:00.425754 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:44:03 crc kubenswrapper[4705]: E0216 15:44:03.422902 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:44:04 crc kubenswrapper[4705]: E0216 15:44:04.421629 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:44:12 crc kubenswrapper[4705]: I0216 
15:44:12.420872 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:44:12 crc kubenswrapper[4705]: E0216 15:44:12.421857 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:44:15 crc kubenswrapper[4705]: E0216 15:44:15.421589 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.043496 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq"] Feb 16 15:44:18 crc kubenswrapper[4705]: E0216 15:44:18.044442 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="extract-utilities" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.044458 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="extract-utilities" Feb 16 15:44:18 crc kubenswrapper[4705]: E0216 15:44:18.044482 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="registry-server" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.044488 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" 
containerName="registry-server" Feb 16 15:44:18 crc kubenswrapper[4705]: E0216 15:44:18.044501 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c695fba-8bed-4549-98f9-b708893eab8e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.044508 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c695fba-8bed-4549-98f9-b708893eab8e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:44:18 crc kubenswrapper[4705]: E0216 15:44:18.044536 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="extract-content" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.044542 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="extract-content" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.044910 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f0818c-3ed8-45c0-825d-90cbd55d5fb0" containerName="registry-server" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.044925 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c695fba-8bed-4549-98f9-b708893eab8e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.046005 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.049828 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.050199 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.052356 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.052702 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.057066 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq"] Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.196247 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.196469 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 
15:44:18.196506 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km2dl\" (UniqueName: \"kubernetes.io/projected/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-kube-api-access-km2dl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.299054 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.299203 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.299228 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km2dl\" (UniqueName: \"kubernetes.io/projected/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-kube-api-access-km2dl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.305971 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.306038 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.317483 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km2dl\" (UniqueName: \"kubernetes.io/projected/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-kube-api-access-km2dl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.369555 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:44:18 crc kubenswrapper[4705]: I0216 15:44:18.932094 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq"] Feb 16 15:44:19 crc kubenswrapper[4705]: E0216 15:44:19.424079 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:44:19 crc kubenswrapper[4705]: I0216 15:44:19.740839 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" event={"ID":"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f","Type":"ContainerStarted","Data":"dd397664545fa9d1d67f27582e093a084832d9e2d00b116935a393b711efe37a"} Feb 16 15:44:19 crc kubenswrapper[4705]: I0216 15:44:19.741143 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" event={"ID":"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f","Type":"ContainerStarted","Data":"842d1b8af9655d4e70d2597153b8f55857f772830bba8347b474673654c258f4"} Feb 16 15:44:19 crc kubenswrapper[4705]: I0216 15:44:19.760727 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" podStartSLOduration=1.331502704 podStartE2EDuration="1.760697867s" podCreationTimestamp="2026-02-16 15:44:18 +0000 UTC" firstStartedPulling="2026-02-16 15:44:18.935786016 +0000 UTC m=+3053.120763092" lastFinishedPulling="2026-02-16 15:44:19.364981169 +0000 UTC m=+3053.549958255" observedRunningTime="2026-02-16 15:44:19.756142618 +0000 UTC m=+3053.941119694" watchObservedRunningTime="2026-02-16 15:44:19.760697867 
+0000 UTC m=+3053.945674943" Feb 16 15:44:24 crc kubenswrapper[4705]: I0216 15:44:24.419560 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:44:24 crc kubenswrapper[4705]: E0216 15:44:24.420452 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:44:26 crc kubenswrapper[4705]: E0216 15:44:26.437638 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:44:33 crc kubenswrapper[4705]: E0216 15:44:33.422959 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:44:36 crc kubenswrapper[4705]: I0216 15:44:36.430336 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:44:36 crc kubenswrapper[4705]: E0216 15:44:36.433056 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:44:39 crc kubenswrapper[4705]: E0216 15:44:39.423772 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:44:47 crc kubenswrapper[4705]: E0216 15:44:47.422134 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:44:49 crc kubenswrapper[4705]: I0216 15:44:49.420887 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:44:49 crc kubenswrapper[4705]: E0216 15:44:49.421934 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:44:51 crc kubenswrapper[4705]: E0216 15:44:51.422693 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:44:59 crc kubenswrapper[4705]: E0216 15:44:59.422752 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.168806 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs"] Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.171030 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.173132 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.173235 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.181577 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-secret-volume\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.181645 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-config-volume\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.181806 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvn2k\" (UniqueName: \"kubernetes.io/projected/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-kube-api-access-gvn2k\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.191778 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs"] Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.284151 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvn2k\" (UniqueName: \"kubernetes.io/projected/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-kube-api-access-gvn2k\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.284565 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-secret-volume\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.284663 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-config-volume\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.285520 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-config-volume\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.296840 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-secret-volume\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.302854 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvn2k\" (UniqueName: \"kubernetes.io/projected/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-kube-api-access-gvn2k\") pod \"collect-profiles-29520945-hpffs\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.420628 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:45:00 crc kubenswrapper[4705]: E0216 15:45:00.421260 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:45:00 crc kubenswrapper[4705]: I0216 15:45:00.497559 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:01 crc kubenswrapper[4705]: I0216 15:45:01.541813 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs"] Feb 16 15:45:02 crc kubenswrapper[4705]: I0216 15:45:02.546667 4705 generic.go:334] "Generic (PLEG): container finished" podID="45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" containerID="c5799f899046339461728bd5e74a089bc2fd5675a54e2ff521c9c4de9307b408" exitCode=0 Feb 16 15:45:02 crc kubenswrapper[4705]: I0216 15:45:02.546771 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" event={"ID":"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd","Type":"ContainerDied","Data":"c5799f899046339461728bd5e74a089bc2fd5675a54e2ff521c9c4de9307b408"} Feb 16 15:45:02 crc kubenswrapper[4705]: I0216 15:45:02.546996 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" event={"ID":"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd","Type":"ContainerStarted","Data":"506d6b1b7a668927ea41f719e81b48c781e2fcbf80489a2eb9a59dcb33bbc03c"} Feb 16 15:45:03 crc kubenswrapper[4705]: I0216 15:45:03.972872 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.070254 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-secret-volume\") pod \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.070359 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvn2k\" (UniqueName: \"kubernetes.io/projected/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-kube-api-access-gvn2k\") pod \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.070575 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-config-volume\") pod \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\" (UID: \"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd\") " Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.071758 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-config-volume" (OuterVolumeSpecName: "config-volume") pod "45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" (UID: "45c99b78-85e9-4a2f-bcc4-76fab1e86ccd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.077017 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-kube-api-access-gvn2k" (OuterVolumeSpecName: "kube-api-access-gvn2k") pod "45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" (UID: "45c99b78-85e9-4a2f-bcc4-76fab1e86ccd"). 
InnerVolumeSpecName "kube-api-access-gvn2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.077540 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" (UID: "45c99b78-85e9-4a2f-bcc4-76fab1e86ccd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.174513 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.174565 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvn2k\" (UniqueName: \"kubernetes.io/projected/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-kube-api-access-gvn2k\") on node \"crc\" DevicePath \"\"" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.174575 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.568342 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" event={"ID":"45c99b78-85e9-4a2f-bcc4-76fab1e86ccd","Type":"ContainerDied","Data":"506d6b1b7a668927ea41f719e81b48c781e2fcbf80489a2eb9a59dcb33bbc03c"} Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.568660 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="506d6b1b7a668927ea41f719e81b48c781e2fcbf80489a2eb9a59dcb33bbc03c" Feb 16 15:45:04 crc kubenswrapper[4705]: I0216 15:45:04.568735 4705 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs" Feb 16 15:45:05 crc kubenswrapper[4705]: I0216 15:45:05.065973 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"] Feb 16 15:45:05 crc kubenswrapper[4705]: I0216 15:45:05.107550 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-6bdkx"] Feb 16 15:45:06 crc kubenswrapper[4705]: E0216 15:45:06.437266 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:45:06 crc kubenswrapper[4705]: I0216 15:45:06.440064 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24c9b6f2-f412-4860-9524-8b671c477f83" path="/var/lib/kubelet/pods/24c9b6f2-f412-4860-9524-8b671c477f83/volumes" Feb 16 15:45:11 crc kubenswrapper[4705]: I0216 15:45:11.419479 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:45:11 crc kubenswrapper[4705]: E0216 15:45:11.420464 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:45:14 crc kubenswrapper[4705]: E0216 15:45:14.429811 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:45:18 crc kubenswrapper[4705]: E0216 15:45:18.426549 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:45:23 crc kubenswrapper[4705]: I0216 15:45:23.421178 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:45:23 crc kubenswrapper[4705]: E0216 15:45:23.422278 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:45:29 crc kubenswrapper[4705]: E0216 15:45:29.423939 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:45:32 crc kubenswrapper[4705]: E0216 15:45:32.424974 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:45:35 crc kubenswrapper[4705]: I0216 15:45:35.139180 4705 scope.go:117] "RemoveContainer" containerID="6fb2c5a749e97a8125f039d31686c6310a49662f79ec4dbdd96faae30b6b0365" Feb 16 15:45:38 crc kubenswrapper[4705]: I0216 15:45:38.420447 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:45:38 crc kubenswrapper[4705]: E0216 15:45:38.422648 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:45:44 crc kubenswrapper[4705]: I0216 15:45:44.428451 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.937207 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.937883 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.938136 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.939874 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.966114 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.966210 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.966478 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:45:44 crc kubenswrapper[4705]: E0216 15:45:44.967820 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:45:49 crc kubenswrapper[4705]: I0216 15:45:49.421164 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:45:49 crc kubenswrapper[4705]: E0216 15:45:49.422906 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:45:57 crc kubenswrapper[4705]: E0216 15:45:57.423210 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:45:59 crc kubenswrapper[4705]: E0216 15:45:59.422120 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:46:04 crc kubenswrapper[4705]: I0216 15:46:04.419881 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:46:04 crc kubenswrapper[4705]: E0216 15:46:04.420823 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:46:08 crc kubenswrapper[4705]: E0216 15:46:08.422495 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:46:14 crc kubenswrapper[4705]: E0216 15:46:14.423068 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:46:16 crc kubenswrapper[4705]: I0216 15:46:16.427975 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:46:16 crc kubenswrapper[4705]: E0216 15:46:16.428668 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:46:22 crc kubenswrapper[4705]: E0216 15:46:22.422320 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:46:27 crc kubenswrapper[4705]: E0216 15:46:27.420913 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:46:29 crc kubenswrapper[4705]: I0216 15:46:29.176993 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-85b76884b7-g4c57" podUID="811fab8b-dbb5-4985-b67f-d3671ea6ff9b" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 16 15:46:30 crc kubenswrapper[4705]: I0216 15:46:30.420591 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:46:30 crc kubenswrapper[4705]: E0216 15:46:30.421171 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:46:34 crc kubenswrapper[4705]: E0216 15:46:34.422941 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:46:42 crc 
kubenswrapper[4705]: E0216 15:46:42.424962 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:46:43 crc kubenswrapper[4705]: I0216 15:46:43.421729 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:46:43 crc kubenswrapper[4705]: E0216 15:46:43.422166 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:46:46 crc kubenswrapper[4705]: E0216 15:46:46.432448 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:46:55 crc kubenswrapper[4705]: I0216 15:46:55.420352 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:46:55 crc kubenswrapper[4705]: E0216 15:46:55.421259 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:46:57 crc kubenswrapper[4705]: E0216 15:46:57.422744 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:46:58 crc kubenswrapper[4705]: E0216 15:46:58.423946 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:47:08 crc kubenswrapper[4705]: E0216 15:47:08.422543 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:47:09 crc kubenswrapper[4705]: I0216 15:47:09.420776 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:47:09 crc kubenswrapper[4705]: E0216 15:47:09.422093 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:47:10 crc kubenswrapper[4705]: E0216 15:47:10.423581 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.930267 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nr2gj"] Feb 16 15:47:16 crc kubenswrapper[4705]: E0216 15:47:16.931599 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" containerName="collect-profiles" Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.931617 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" containerName="collect-profiles" Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.931974 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" containerName="collect-profiles" Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.934184 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.946018 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nr2gj"] Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.976201 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhwwt\" (UniqueName: \"kubernetes.io/projected/f830efc9-fda9-4d23-9348-7f07420d7006-kube-api-access-mhwwt\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.976469 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-catalog-content\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:16 crc kubenswrapper[4705]: I0216 15:47:16.976520 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-utilities\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.079090 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhwwt\" (UniqueName: \"kubernetes.io/projected/f830efc9-fda9-4d23-9348-7f07420d7006-kube-api-access-mhwwt\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.079350 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-catalog-content\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.079421 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-utilities\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.080010 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-utilities\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.080237 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-catalog-content\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.101751 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhwwt\" (UniqueName: \"kubernetes.io/projected/f830efc9-fda9-4d23-9348-7f07420d7006-kube-api-access-mhwwt\") pod \"community-operators-nr2gj\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.273022 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:17 crc kubenswrapper[4705]: I0216 15:47:17.936306 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nr2gj"] Feb 16 15:47:18 crc kubenswrapper[4705]: I0216 15:47:18.180923 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerStarted","Data":"c815de1a318bd321678c402001f3fc4a11a959753a5ea9a79d6f02d5a2ff47ff"} Feb 16 15:47:19 crc kubenswrapper[4705]: I0216 15:47:19.206068 4705 generic.go:334] "Generic (PLEG): container finished" podID="f830efc9-fda9-4d23-9348-7f07420d7006" containerID="21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14" exitCode=0 Feb 16 15:47:19 crc kubenswrapper[4705]: I0216 15:47:19.206306 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerDied","Data":"21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14"} Feb 16 15:47:20 crc kubenswrapper[4705]: I0216 15:47:20.224035 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerStarted","Data":"ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41"} Feb 16 15:47:20 crc kubenswrapper[4705]: I0216 15:47:20.420454 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:47:20 crc kubenswrapper[4705]: E0216 15:47:20.421526 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:47:20 crc kubenswrapper[4705]: E0216 15:47:20.421587 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:47:21 crc kubenswrapper[4705]: E0216 15:47:21.422423 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:47:22 crc kubenswrapper[4705]: I0216 15:47:22.248206 4705 generic.go:334] "Generic (PLEG): container finished" podID="f830efc9-fda9-4d23-9348-7f07420d7006" containerID="ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41" exitCode=0 Feb 16 15:47:22 crc kubenswrapper[4705]: I0216 15:47:22.248351 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerDied","Data":"ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41"} Feb 16 15:47:23 crc kubenswrapper[4705]: I0216 15:47:23.261245 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerStarted","Data":"3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b"} Feb 16 15:47:23 crc kubenswrapper[4705]: I0216 15:47:23.294738 4705 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nr2gj" podStartSLOduration=3.826215517 podStartE2EDuration="7.294720852s" podCreationTimestamp="2026-02-16 15:47:16 +0000 UTC" firstStartedPulling="2026-02-16 15:47:19.210785673 +0000 UTC m=+3233.395762749" lastFinishedPulling="2026-02-16 15:47:22.679290968 +0000 UTC m=+3236.864268084" observedRunningTime="2026-02-16 15:47:23.288087395 +0000 UTC m=+3237.473064551" watchObservedRunningTime="2026-02-16 15:47:23.294720852 +0000 UTC m=+3237.479697928" Feb 16 15:47:27 crc kubenswrapper[4705]: I0216 15:47:27.274640 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:27 crc kubenswrapper[4705]: I0216 15:47:27.275607 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:27 crc kubenswrapper[4705]: I0216 15:47:27.349743 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:27 crc kubenswrapper[4705]: I0216 15:47:27.407083 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:27 crc kubenswrapper[4705]: I0216 15:47:27.614452 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nr2gj"] Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.331326 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nr2gj" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="registry-server" containerID="cri-o://3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b" gracePeriod=2 Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.900668 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.981384 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-utilities\") pod \"f830efc9-fda9-4d23-9348-7f07420d7006\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.981861 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-catalog-content\") pod \"f830efc9-fda9-4d23-9348-7f07420d7006\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.981992 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhwwt\" (UniqueName: \"kubernetes.io/projected/f830efc9-fda9-4d23-9348-7f07420d7006-kube-api-access-mhwwt\") pod \"f830efc9-fda9-4d23-9348-7f07420d7006\" (UID: \"f830efc9-fda9-4d23-9348-7f07420d7006\") " Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.982268 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-utilities" (OuterVolumeSpecName: "utilities") pod "f830efc9-fda9-4d23-9348-7f07420d7006" (UID: "f830efc9-fda9-4d23-9348-7f07420d7006"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.983082 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:47:29 crc kubenswrapper[4705]: I0216 15:47:29.988435 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f830efc9-fda9-4d23-9348-7f07420d7006-kube-api-access-mhwwt" (OuterVolumeSpecName: "kube-api-access-mhwwt") pod "f830efc9-fda9-4d23-9348-7f07420d7006" (UID: "f830efc9-fda9-4d23-9348-7f07420d7006"). InnerVolumeSpecName "kube-api-access-mhwwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.035242 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f830efc9-fda9-4d23-9348-7f07420d7006" (UID: "f830efc9-fda9-4d23-9348-7f07420d7006"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.084332 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830efc9-fda9-4d23-9348-7f07420d7006-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.084416 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhwwt\" (UniqueName: \"kubernetes.io/projected/f830efc9-fda9-4d23-9348-7f07420d7006-kube-api-access-mhwwt\") on node \"crc\" DevicePath \"\"" Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.347397 4705 generic.go:334] "Generic (PLEG): container finished" podID="f830efc9-fda9-4d23-9348-7f07420d7006" containerID="3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b" exitCode=0 Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.347481 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerDied","Data":"3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b"} Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.347539 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr2gj" event={"ID":"f830efc9-fda9-4d23-9348-7f07420d7006","Type":"ContainerDied","Data":"c815de1a318bd321678c402001f3fc4a11a959753a5ea9a79d6f02d5a2ff47ff"} Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.347539 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nr2gj" Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.347567 4705 scope.go:117] "RemoveContainer" containerID="3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b" Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.381134 4705 scope.go:117] "RemoveContainer" containerID="ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41" Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.418741 4705 scope.go:117] "RemoveContainer" containerID="21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14" Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.439128 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nr2gj"] Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.439184 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nr2gj"] Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.481271 4705 scope.go:117] "RemoveContainer" containerID="3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b" Feb 16 15:47:30 crc kubenswrapper[4705]: E0216 15:47:30.482075 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b\": container with ID starting with 3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b not found: ID does not exist" containerID="3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b" Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.482152 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b"} err="failed to get container status \"3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b\": rpc error: code = NotFound desc = could not find 
container \"3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b\": container with ID starting with 3fc85cc883986d5ad92d82b1a672f1d261cc2950801f855f2e59783dc302dd0b not found: ID does not exist" Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.482191 4705 scope.go:117] "RemoveContainer" containerID="ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41" Feb 16 15:47:30 crc kubenswrapper[4705]: E0216 15:47:30.482636 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41\": container with ID starting with ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41 not found: ID does not exist" containerID="ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41" Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.482686 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41"} err="failed to get container status \"ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41\": rpc error: code = NotFound desc = could not find container \"ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41\": container with ID starting with ea407fd29479a2ddd4a2b024092719b6cd812821af1199c09429037627566f41 not found: ID does not exist" Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.482717 4705 scope.go:117] "RemoveContainer" containerID="21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14" Feb 16 15:47:30 crc kubenswrapper[4705]: E0216 15:47:30.483016 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14\": container with ID starting with 21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14 not found: ID does 
not exist" containerID="21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14" Feb 16 15:47:30 crc kubenswrapper[4705]: I0216 15:47:30.483046 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14"} err="failed to get container status \"21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14\": rpc error: code = NotFound desc = could not find container \"21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14\": container with ID starting with 21316d8f9c6d17a28fcb85c66f9b64db24b0495b582fea063a01042843ac4d14 not found: ID does not exist" Feb 16 15:47:31 crc kubenswrapper[4705]: E0216 15:47:31.422895 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:47:32 crc kubenswrapper[4705]: I0216 15:47:32.431461 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" path="/var/lib/kubelet/pods/f830efc9-fda9-4d23-9348-7f07420d7006/volumes" Feb 16 15:47:34 crc kubenswrapper[4705]: E0216 15:47:34.422129 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:47:35 crc kubenswrapper[4705]: I0216 15:47:35.419898 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:47:35 crc kubenswrapper[4705]: E0216 15:47:35.420437 4705 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:47:45 crc kubenswrapper[4705]: E0216 15:47:45.423549 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:47:49 crc kubenswrapper[4705]: E0216 15:47:49.423302 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:47:50 crc kubenswrapper[4705]: I0216 15:47:50.422422 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:47:50 crc kubenswrapper[4705]: E0216 15:47:50.423044 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:47:59 crc kubenswrapper[4705]: E0216 15:47:59.423818 4705 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:48:03 crc kubenswrapper[4705]: I0216 15:48:03.420228 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:48:03 crc kubenswrapper[4705]: E0216 15:48:03.421164 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:48:03 crc kubenswrapper[4705]: E0216 15:48:03.421687 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:48:14 crc kubenswrapper[4705]: E0216 15:48:14.423153 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:48:16 crc kubenswrapper[4705]: E0216 15:48:16.432355 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:48:17 crc kubenswrapper[4705]: I0216 15:48:17.419665 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:48:17 crc kubenswrapper[4705]: E0216 15:48:17.420511 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:48:27 crc kubenswrapper[4705]: E0216 15:48:27.424197 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:48:27 crc kubenswrapper[4705]: E0216 15:48:27.424272 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:48:28 crc kubenswrapper[4705]: I0216 15:48:28.421147 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:48:28 crc kubenswrapper[4705]: E0216 15:48:28.422289 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:48:39 crc kubenswrapper[4705]: E0216 15:48:39.423231 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:48:41 crc kubenswrapper[4705]: E0216 15:48:41.421735 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:48:42 crc kubenswrapper[4705]: I0216 15:48:42.419998 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:48:43 crc kubenswrapper[4705]: I0216 15:48:43.341608 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"4a9a2e2f883a2d51f28d2fe4041e4e904adcebfb7379e631823880828e02e2b8"} Feb 16 15:48:54 crc kubenswrapper[4705]: E0216 15:48:54.424479 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:48:55 crc kubenswrapper[4705]: E0216 15:48:55.423012 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:49:05 crc kubenswrapper[4705]: E0216 15:49:05.423014 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:49:08 crc kubenswrapper[4705]: E0216 15:49:08.423196 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:49:17 crc kubenswrapper[4705]: E0216 15:49:17.425187 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:49:23 crc kubenswrapper[4705]: E0216 15:49:23.423148 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:49:32 crc kubenswrapper[4705]: E0216 15:49:32.422466 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:49:34 crc kubenswrapper[4705]: E0216 15:49:34.421815 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:49:45 crc kubenswrapper[4705]: E0216 15:49:45.422250 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:49:47 crc kubenswrapper[4705]: E0216 15:49:47.424615 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:49:59 crc kubenswrapper[4705]: E0216 15:49:59.424978 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:49:59 crc kubenswrapper[4705]: E0216 15:49:59.425258 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:50:12 crc kubenswrapper[4705]: E0216 15:50:12.422255 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:50:14 crc kubenswrapper[4705]: E0216 15:50:14.421759 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:50:23 crc kubenswrapper[4705]: E0216 15:50:23.422083 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:50:27 crc kubenswrapper[4705]: E0216 15:50:27.422818 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.254489 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bmp9d"] Feb 16 15:50:31 crc kubenswrapper[4705]: E0216 15:50:31.255562 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="extract-utilities" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.255576 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="extract-utilities" Feb 16 15:50:31 crc kubenswrapper[4705]: E0216 15:50:31.255587 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="extract-content" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.255593 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="extract-content" Feb 16 15:50:31 crc kubenswrapper[4705]: E0216 15:50:31.255606 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="registry-server" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.255614 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="registry-server" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.255850 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f830efc9-fda9-4d23-9348-7f07420d7006" containerName="registry-server" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.257634 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.272305 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bmp9d"] Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.332770 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-catalog-content\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.332839 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-utilities\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.333057 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bgkd\" (UniqueName: \"kubernetes.io/projected/b012865d-7789-4025-b085-85099262b2e7-kube-api-access-6bgkd\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.435943 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-catalog-content\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.436032 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-utilities\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.436102 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bgkd\" (UniqueName: \"kubernetes.io/projected/b012865d-7789-4025-b085-85099262b2e7-kube-api-access-6bgkd\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.437009 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-catalog-content\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.437251 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-utilities\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.455485 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bgkd\" (UniqueName: \"kubernetes.io/projected/b012865d-7789-4025-b085-85099262b2e7-kube-api-access-6bgkd\") pod \"redhat-operators-bmp9d\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:31 crc kubenswrapper[4705]: I0216 15:50:31.630073 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:32 crc kubenswrapper[4705]: I0216 15:50:32.154255 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bmp9d"] Feb 16 15:50:32 crc kubenswrapper[4705]: I0216 15:50:32.734933 4705 generic.go:334] "Generic (PLEG): container finished" podID="b012865d-7789-4025-b085-85099262b2e7" containerID="730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d" exitCode=0 Feb 16 15:50:32 crc kubenswrapper[4705]: I0216 15:50:32.735278 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerDied","Data":"730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d"} Feb 16 15:50:32 crc kubenswrapper[4705]: I0216 15:50:32.735330 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerStarted","Data":"449e1847ce5c6224e7f6503e083f2d4afc066c34398cfa6124ed5426ddeb28b3"} Feb 16 15:50:34 crc kubenswrapper[4705]: I0216 15:50:34.760056 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerStarted","Data":"10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb"} Feb 16 15:50:35 crc kubenswrapper[4705]: I0216 15:50:35.777137 4705 generic.go:334] "Generic (PLEG): container finished" podID="df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" containerID="dd397664545fa9d1d67f27582e093a084832d9e2d00b116935a393b711efe37a" exitCode=2 Feb 16 15:50:35 crc kubenswrapper[4705]: I0216 15:50:35.777240 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" 
event={"ID":"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f","Type":"ContainerDied","Data":"dd397664545fa9d1d67f27582e093a084832d9e2d00b116935a393b711efe37a"} Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.343102 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.401876 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km2dl\" (UniqueName: \"kubernetes.io/projected/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-kube-api-access-km2dl\") pod \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.402092 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-ssh-key-openstack-edpm-ipam\") pod \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.402122 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-inventory\") pod \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\" (UID: \"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f\") " Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.413702 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-kube-api-access-km2dl" (OuterVolumeSpecName: "kube-api-access-km2dl") pod "df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" (UID: "df22a5a3-55ac-4d51-99bb-c6624cd8ba8f"). InnerVolumeSpecName "kube-api-access-km2dl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.434748 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-inventory" (OuterVolumeSpecName: "inventory") pod "df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" (UID: "df22a5a3-55ac-4d51-99bb-c6624cd8ba8f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.435837 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" (UID: "df22a5a3-55ac-4d51-99bb-c6624cd8ba8f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.506406 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km2dl\" (UniqueName: \"kubernetes.io/projected/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-kube-api-access-km2dl\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.506448 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.506459 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df22a5a3-55ac-4d51-99bb-c6624cd8ba8f-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.802084 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" 
event={"ID":"df22a5a3-55ac-4d51-99bb-c6624cd8ba8f","Type":"ContainerDied","Data":"842d1b8af9655d4e70d2597153b8f55857f772830bba8347b474673654c258f4"} Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.802132 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="842d1b8af9655d4e70d2597153b8f55857f772830bba8347b474673654c258f4" Feb 16 15:50:37 crc kubenswrapper[4705]: I0216 15:50:37.802143 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq" Feb 16 15:50:38 crc kubenswrapper[4705]: E0216 15:50:38.426725 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:50:38 crc kubenswrapper[4705]: I0216 15:50:38.816781 4705 generic.go:334] "Generic (PLEG): container finished" podID="b012865d-7789-4025-b085-85099262b2e7" containerID="10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb" exitCode=0 Feb 16 15:50:38 crc kubenswrapper[4705]: I0216 15:50:38.816860 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerDied","Data":"10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb"} Feb 16 15:50:39 crc kubenswrapper[4705]: E0216 15:50:39.422874 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:50:40 crc 
kubenswrapper[4705]: I0216 15:50:40.841959 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerStarted","Data":"dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01"} Feb 16 15:50:40 crc kubenswrapper[4705]: I0216 15:50:40.873153 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bmp9d" podStartSLOduration=2.41151906 podStartE2EDuration="9.873135336s" podCreationTimestamp="2026-02-16 15:50:31 +0000 UTC" firstStartedPulling="2026-02-16 15:50:32.738252364 +0000 UTC m=+3426.923229460" lastFinishedPulling="2026-02-16 15:50:40.19986865 +0000 UTC m=+3434.384845736" observedRunningTime="2026-02-16 15:50:40.868147035 +0000 UTC m=+3435.053124131" watchObservedRunningTime="2026-02-16 15:50:40.873135336 +0000 UTC m=+3435.058112412" Feb 16 15:50:41 crc kubenswrapper[4705]: I0216 15:50:41.630284 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:41 crc kubenswrapper[4705]: I0216 15:50:41.630332 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:42 crc kubenswrapper[4705]: I0216 15:50:42.695023 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bmp9d" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="registry-server" probeResult="failure" output=< Feb 16 15:50:42 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 15:50:42 crc kubenswrapper[4705]: > Feb 16 15:50:49 crc kubenswrapper[4705]: I0216 15:50:49.423088 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:50:49 crc kubenswrapper[4705]: E0216 15:50:49.551808 4705 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:50:49 crc kubenswrapper[4705]: E0216 15:50:49.551883 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:50:49 crc kubenswrapper[4705]: E0216 15:50:49.552026 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:50:49 crc kubenswrapper[4705]: E0216 15:50:49.553223 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:50:51 crc kubenswrapper[4705]: I0216 15:50:51.703197 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:51 crc kubenswrapper[4705]: I0216 15:50:51.785196 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:51 crc kubenswrapper[4705]: I0216 15:50:51.950773 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bmp9d"] Feb 16 15:50:52 crc kubenswrapper[4705]: I0216 15:50:52.992152 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bmp9d" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="registry-server" containerID="cri-o://dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01" gracePeriod=2 Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.519430 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.670647 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-catalog-content\") pod \"b012865d-7789-4025-b085-85099262b2e7\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.670791 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-utilities\") pod \"b012865d-7789-4025-b085-85099262b2e7\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.671040 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bgkd\" (UniqueName: \"kubernetes.io/projected/b012865d-7789-4025-b085-85099262b2e7-kube-api-access-6bgkd\") pod \"b012865d-7789-4025-b085-85099262b2e7\" (UID: \"b012865d-7789-4025-b085-85099262b2e7\") " Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.671830 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-utilities" (OuterVolumeSpecName: "utilities") pod "b012865d-7789-4025-b085-85099262b2e7" (UID: "b012865d-7789-4025-b085-85099262b2e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.677667 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b012865d-7789-4025-b085-85099262b2e7-kube-api-access-6bgkd" (OuterVolumeSpecName: "kube-api-access-6bgkd") pod "b012865d-7789-4025-b085-85099262b2e7" (UID: "b012865d-7789-4025-b085-85099262b2e7"). InnerVolumeSpecName "kube-api-access-6bgkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.773698 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bgkd\" (UniqueName: \"kubernetes.io/projected/b012865d-7789-4025-b085-85099262b2e7-kube-api-access-6bgkd\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.773740 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.812828 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b012865d-7789-4025-b085-85099262b2e7" (UID: "b012865d-7789-4025-b085-85099262b2e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:50:53 crc kubenswrapper[4705]: I0216 15:50:53.875936 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b012865d-7789-4025-b085-85099262b2e7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.011080 4705 generic.go:334] "Generic (PLEG): container finished" podID="b012865d-7789-4025-b085-85099262b2e7" containerID="dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01" exitCode=0 Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.011124 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerDied","Data":"dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01"} Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.011153 4705 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-bmp9d" event={"ID":"b012865d-7789-4025-b085-85099262b2e7","Type":"ContainerDied","Data":"449e1847ce5c6224e7f6503e083f2d4afc066c34398cfa6124ed5426ddeb28b3"} Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.011171 4705 scope.go:117] "RemoveContainer" containerID="dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.011181 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bmp9d" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.053471 4705 scope.go:117] "RemoveContainer" containerID="10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.059873 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bmp9d"] Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.081119 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bmp9d"] Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.083693 4705 scope.go:117] "RemoveContainer" containerID="730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.138464 4705 scope.go:117] "RemoveContainer" containerID="dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 15:50:54.138974 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01\": container with ID starting with dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01 not found: ID does not exist" containerID="dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.139005 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01"} err="failed to get container status \"dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01\": rpc error: code = NotFound desc = could not find container \"dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01\": container with ID starting with dcb6ce468d68afa3f600a3d6ac724115f7301c9d9192d11b7d6294aadf5d3b01 not found: ID does not exist" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.139025 4705 scope.go:117] "RemoveContainer" containerID="10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 15:50:54.139581 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb\": container with ID starting with 10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb not found: ID does not exist" containerID="10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.139628 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb"} err="failed to get container status \"10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb\": rpc error: code = NotFound desc = could not find container \"10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb\": container with ID starting with 10c1295a8b6c1949739e3f43caf1c8a137ef4d3886448d74d913e1e43deb6ceb not found: ID does not exist" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.139644 4705 scope.go:117] "RemoveContainer" containerID="730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 
15:50:54.139966 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d\": container with ID starting with 730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d not found: ID does not exist" containerID="730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.139985 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d"} err="failed to get container status \"730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d\": rpc error: code = NotFound desc = could not find container \"730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d\": container with ID starting with 730535f0893f84a7cb8e6f584a5fe6a204c0f91b9ec622489c286ea4ddb0da8d not found: ID does not exist" Feb 16 15:50:54 crc kubenswrapper[4705]: I0216 15:50:54.437195 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b012865d-7789-4025-b085-85099262b2e7" path="/var/lib/kubelet/pods/b012865d-7789-4025-b085-85099262b2e7/volumes" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 15:50:54.538222 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 15:50:54.538308 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 15:50:54.538516 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5
d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:50:54 crc kubenswrapper[4705]: E0216 15:50:54.539758 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.010870 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mmp5l"] Feb 16 15:50:58 crc kubenswrapper[4705]: E0216 15:50:58.012224 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="registry-server" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.012240 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="registry-server" Feb 16 15:50:58 crc kubenswrapper[4705]: E0216 15:50:58.012255 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.012262 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:50:58 crc kubenswrapper[4705]: E0216 15:50:58.012297 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="extract-utilities" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.012305 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="extract-utilities" Feb 16 15:50:58 crc kubenswrapper[4705]: E0216 15:50:58.012317 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="extract-content" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.012324 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="extract-content" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 
15:50:58.012785 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b012865d-7789-4025-b085-85099262b2e7" containerName="registry-server" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.012801 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="df22a5a3-55ac-4d51-99bb-c6624cd8ba8f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.014558 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.044543 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mmp5l"] Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.193056 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-utilities\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.193112 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-catalog-content\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.193560 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65542\" (UniqueName: \"kubernetes.io/projected/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-kube-api-access-65542\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " 
pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.295868 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65542\" (UniqueName: \"kubernetes.io/projected/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-kube-api-access-65542\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.296094 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-utilities\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.296134 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-catalog-content\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.296994 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-catalog-content\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.297003 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-utilities\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " 
pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.316285 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65542\" (UniqueName: \"kubernetes.io/projected/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-kube-api-access-65542\") pod \"certified-operators-mmp5l\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.345002 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:50:58 crc kubenswrapper[4705]: W0216 15:50:58.828697 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8f0ae40_309a_42b8_b7c3_63d7d0dccdd4.slice/crio-c8f612fe70ebf71b578a664801df48627ffc6f17a288780dcd987a707f76549f WatchSource:0}: Error finding container c8f612fe70ebf71b578a664801df48627ffc6f17a288780dcd987a707f76549f: Status 404 returned error can't find the container with id c8f612fe70ebf71b578a664801df48627ffc6f17a288780dcd987a707f76549f Feb 16 15:50:58 crc kubenswrapper[4705]: I0216 15:50:58.831341 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mmp5l"] Feb 16 15:50:59 crc kubenswrapper[4705]: I0216 15:50:59.081162 4705 generic.go:334] "Generic (PLEG): container finished" podID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerID="c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60" exitCode=0 Feb 16 15:50:59 crc kubenswrapper[4705]: I0216 15:50:59.081357 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerDied","Data":"c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60"} Feb 16 15:50:59 crc kubenswrapper[4705]: I0216 15:50:59.081513 
4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerStarted","Data":"c8f612fe70ebf71b578a664801df48627ffc6f17a288780dcd987a707f76549f"} Feb 16 15:51:00 crc kubenswrapper[4705]: I0216 15:51:00.095084 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerStarted","Data":"9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d"} Feb 16 15:51:01 crc kubenswrapper[4705]: I0216 15:51:01.684094 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:51:01 crc kubenswrapper[4705]: I0216 15:51:01.684573 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:51:02 crc kubenswrapper[4705]: I0216 15:51:02.115747 4705 generic.go:334] "Generic (PLEG): container finished" podID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerID="9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d" exitCode=0 Feb 16 15:51:02 crc kubenswrapper[4705]: I0216 15:51:02.115798 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerDied","Data":"9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d"} Feb 16 15:51:03 crc kubenswrapper[4705]: I0216 15:51:03.130042 4705 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerStarted","Data":"4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67"} Feb 16 15:51:03 crc kubenswrapper[4705]: I0216 15:51:03.156583 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mmp5l" podStartSLOduration=2.724151968 podStartE2EDuration="6.156560821s" podCreationTimestamp="2026-02-16 15:50:57 +0000 UTC" firstStartedPulling="2026-02-16 15:50:59.083466954 +0000 UTC m=+3453.268444030" lastFinishedPulling="2026-02-16 15:51:02.515875807 +0000 UTC m=+3456.700852883" observedRunningTime="2026-02-16 15:51:03.149779769 +0000 UTC m=+3457.334756865" watchObservedRunningTime="2026-02-16 15:51:03.156560821 +0000 UTC m=+3457.341537897" Feb 16 15:51:03 crc kubenswrapper[4705]: E0216 15:51:03.420673 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:51:08 crc kubenswrapper[4705]: I0216 15:51:08.345951 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:51:08 crc kubenswrapper[4705]: I0216 15:51:08.346496 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:51:08 crc kubenswrapper[4705]: I0216 15:51:08.409648 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:51:09 crc kubenswrapper[4705]: I0216 15:51:09.281607 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:51:09 crc kubenswrapper[4705]: I0216 15:51:09.362426 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mmp5l"] Feb 16 15:51:09 crc kubenswrapper[4705]: E0216 15:51:09.422881 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.228035 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mmp5l" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="registry-server" containerID="cri-o://4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67" gracePeriod=2 Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.798272 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.982307 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65542\" (UniqueName: \"kubernetes.io/projected/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-kube-api-access-65542\") pod \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.982505 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-catalog-content\") pod \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.982571 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-utilities\") pod \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\" (UID: \"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4\") " Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.983204 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-utilities" (OuterVolumeSpecName: "utilities") pod "b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" (UID: "b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:51:11 crc kubenswrapper[4705]: I0216 15:51:11.988654 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-kube-api-access-65542" (OuterVolumeSpecName: "kube-api-access-65542") pod "b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" (UID: "b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4"). InnerVolumeSpecName "kube-api-access-65542". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.045676 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" (UID: "b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.085709 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65542\" (UniqueName: \"kubernetes.io/projected/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-kube-api-access-65542\") on node \"crc\" DevicePath \"\"" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.085750 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.085762 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.242921 4705 generic.go:334] "Generic (PLEG): container finished" podID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerID="4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67" exitCode=0 Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.243566 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerDied","Data":"4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67"} Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.244079 4705 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-mmp5l" event={"ID":"b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4","Type":"ContainerDied","Data":"c8f612fe70ebf71b578a664801df48627ffc6f17a288780dcd987a707f76549f"} Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.243669 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mmp5l" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.244131 4705 scope.go:117] "RemoveContainer" containerID="4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.265688 4705 scope.go:117] "RemoveContainer" containerID="9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.291598 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mmp5l"] Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.306166 4705 scope.go:117] "RemoveContainer" containerID="c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.310871 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mmp5l"] Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.383046 4705 scope.go:117] "RemoveContainer" containerID="4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67" Feb 16 15:51:12 crc kubenswrapper[4705]: E0216 15:51:12.383586 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67\": container with ID starting with 4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67 not found: ID does not exist" containerID="4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 
15:51:12.383621 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67"} err="failed to get container status \"4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67\": rpc error: code = NotFound desc = could not find container \"4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67\": container with ID starting with 4870d4f8f6f17172e268c3dc00f8becd6a68acb04693376f568c2e718427df67 not found: ID does not exist" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.383645 4705 scope.go:117] "RemoveContainer" containerID="9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d" Feb 16 15:51:12 crc kubenswrapper[4705]: E0216 15:51:12.384033 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d\": container with ID starting with 9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d not found: ID does not exist" containerID="9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.384057 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d"} err="failed to get container status \"9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d\": rpc error: code = NotFound desc = could not find container \"9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d\": container with ID starting with 9f44bb96333465a8fa75a85acc17225e3c61745b119dda9858016fa235ba425d not found: ID does not exist" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.384072 4705 scope.go:117] "RemoveContainer" containerID="c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60" Feb 16 15:51:12 crc 
kubenswrapper[4705]: E0216 15:51:12.384597 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60\": container with ID starting with c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60 not found: ID does not exist" containerID="c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.384617 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60"} err="failed to get container status \"c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60\": rpc error: code = NotFound desc = could not find container \"c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60\": container with ID starting with c940683f9767b492bd62c4c2621eb86d3cc9a4aeff6f18b285337c355b5d4b60 not found: ID does not exist" Feb 16 15:51:12 crc kubenswrapper[4705]: I0216 15:51:12.431899 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" path="/var/lib/kubelet/pods/b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4/volumes" Feb 16 15:51:17 crc kubenswrapper[4705]: E0216 15:51:17.423226 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:51:20 crc kubenswrapper[4705]: E0216 15:51:20.422535 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:51:30 crc kubenswrapper[4705]: E0216 15:51:30.422692 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:51:31 crc kubenswrapper[4705]: I0216 15:51:31.684708 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:51:31 crc kubenswrapper[4705]: I0216 15:51:31.685170 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:51:33 crc kubenswrapper[4705]: E0216 15:51:33.421909 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:51:43 crc kubenswrapper[4705]: E0216 15:51:43.423609 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:51:47 crc kubenswrapper[4705]: E0216 15:51:47.423320 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.048428 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn"] Feb 16 15:51:55 crc kubenswrapper[4705]: E0216 15:51:55.051722 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="extract-content" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.051857 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="extract-content" Feb 16 15:51:55 crc kubenswrapper[4705]: E0216 15:51:55.051974 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="extract-utilities" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.052053 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="extract-utilities" Feb 16 15:51:55 crc kubenswrapper[4705]: E0216 15:51:55.052174 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="registry-server" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.052260 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="registry-server" Feb 16 15:51:55 crc kubenswrapper[4705]: 
I0216 15:51:55.052731 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8f0ae40-309a-42b8-b7c3-63d7d0dccdd4" containerName="registry-server" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.054289 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.056933 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.057215 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.057419 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.058214 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.076793 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn"] Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.228669 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r977\" (UniqueName: \"kubernetes.io/projected/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-kube-api-access-5r977\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.228863 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.228926 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.331746 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.331942 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r977\" (UniqueName: \"kubernetes.io/projected/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-kube-api-access-5r977\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.332139 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: 
\"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.339422 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.340018 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.357901 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r977\" (UniqueName: \"kubernetes.io/projected/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-kube-api-access-5r977\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49hkn\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.388180 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:51:55 crc kubenswrapper[4705]: I0216 15:51:55.997292 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn"] Feb 16 15:51:56 crc kubenswrapper[4705]: I0216 15:51:56.758298 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" event={"ID":"49d4643c-71ab-4c0f-b3cb-0f494971aa6e","Type":"ContainerStarted","Data":"7d0a2e37aabc9be4da171f1a7589105d521f79b0a14feba542fbc144bbbfd51c"} Feb 16 15:51:57 crc kubenswrapper[4705]: E0216 15:51:57.423594 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:51:57 crc kubenswrapper[4705]: I0216 15:51:57.778424 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" event={"ID":"49d4643c-71ab-4c0f-b3cb-0f494971aa6e","Type":"ContainerStarted","Data":"e4aefe0d3bc6b447e40b188d95e9547cb87edeaef2a29ac55cc4d26271d01d98"} Feb 16 15:51:57 crc kubenswrapper[4705]: I0216 15:51:57.818466 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" podStartSLOduration=2.237688361 podStartE2EDuration="2.818427308s" podCreationTimestamp="2026-02-16 15:51:55 +0000 UTC" firstStartedPulling="2026-02-16 15:51:56.004699985 +0000 UTC m=+3510.189677061" lastFinishedPulling="2026-02-16 15:51:56.585438932 +0000 UTC m=+3510.770416008" observedRunningTime="2026-02-16 15:51:57.80965878 +0000 UTC m=+3511.994635856" watchObservedRunningTime="2026-02-16 
15:51:57.818427308 +0000 UTC m=+3512.003404404" Feb 16 15:52:01 crc kubenswrapper[4705]: E0216 15:52:01.422653 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.686959 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.687645 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.687748 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.689918 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4a9a2e2f883a2d51f28d2fe4041e4e904adcebfb7379e631823880828e02e2b8"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.690065 4705 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://4a9a2e2f883a2d51f28d2fe4041e4e904adcebfb7379e631823880828e02e2b8" gracePeriod=600 Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.824467 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="4a9a2e2f883a2d51f28d2fe4041e4e904adcebfb7379e631823880828e02e2b8" exitCode=0 Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.824528 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"4a9a2e2f883a2d51f28d2fe4041e4e904adcebfb7379e631823880828e02e2b8"} Feb 16 15:52:01 crc kubenswrapper[4705]: I0216 15:52:01.824571 4705 scope.go:117] "RemoveContainer" containerID="33e0a751c3e610a6b3bdfeddb1c861d123873601bc7a60c01886342fee1f2c24" Feb 16 15:52:02 crc kubenswrapper[4705]: I0216 15:52:02.839164 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"} Feb 16 15:52:09 crc kubenswrapper[4705]: E0216 15:52:09.423694 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:52:15 crc kubenswrapper[4705]: E0216 15:52:15.422402 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:52:24 crc kubenswrapper[4705]: E0216 15:52:24.424524 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:52:27 crc kubenswrapper[4705]: E0216 15:52:27.421751 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:52:38 crc kubenswrapper[4705]: E0216 15:52:38.422095 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:52:40 crc kubenswrapper[4705]: E0216 15:52:40.421939 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:52:53 crc kubenswrapper[4705]: E0216 15:52:53.422198 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:52:53 crc kubenswrapper[4705]: E0216 15:52:53.422197 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:53:05 crc kubenswrapper[4705]: E0216 15:53:05.423826 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:53:07 crc kubenswrapper[4705]: E0216 15:53:07.421528 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:53:17 crc kubenswrapper[4705]: E0216 15:53:17.421562 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:53:19 crc kubenswrapper[4705]: E0216 15:53:19.421209 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:53:30 crc kubenswrapper[4705]: E0216 15:53:30.423781 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:53:31 crc kubenswrapper[4705]: E0216 15:53:31.422376 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:53:42 crc kubenswrapper[4705]: E0216 15:53:42.422030 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:53:46 crc kubenswrapper[4705]: E0216 15:53:46.431584 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:53:53 crc kubenswrapper[4705]: E0216 15:53:53.423944 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:53:59 crc kubenswrapper[4705]: E0216 15:53:59.422602 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:54:08 crc kubenswrapper[4705]: E0216 15:54:08.421915 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:54:11 crc kubenswrapper[4705]: E0216 15:54:11.422688 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:54:23 crc kubenswrapper[4705]: E0216 15:54:23.422851 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:54:25 crc kubenswrapper[4705]: E0216 15:54:25.421946 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:54:31 crc kubenswrapper[4705]: I0216 15:54:31.684083 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:54:31 crc kubenswrapper[4705]: I0216 15:54:31.684712 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:54:36 crc kubenswrapper[4705]: E0216 15:54:36.428535 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:54:38 crc kubenswrapper[4705]: E0216 15:54:38.441565 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:54:50 crc kubenswrapper[4705]: E0216 15:54:50.423846 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:54:50 crc kubenswrapper[4705]: E0216 15:54:50.427317 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:55:01 crc kubenswrapper[4705]: E0216 15:55:01.422068 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:55:01 crc kubenswrapper[4705]: I0216 15:55:01.684710 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:55:01 crc kubenswrapper[4705]: I0216 15:55:01.684759 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:55:03 crc kubenswrapper[4705]: E0216 15:55:03.422626 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:55:12 crc kubenswrapper[4705]: E0216 15:55:12.425169 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:55:15 crc kubenswrapper[4705]: E0216 15:55:15.422136 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:55:23 crc kubenswrapper[4705]: E0216 15:55:23.423199 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:55:29 crc kubenswrapper[4705]: E0216 15:55:29.422737 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:55:31 crc kubenswrapper[4705]: I0216 15:55:31.683895 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:55:31 crc kubenswrapper[4705]: I0216 15:55:31.684448 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:55:31 crc kubenswrapper[4705]: I0216 15:55:31.684494 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 15:55:31 crc kubenswrapper[4705]: I0216 15:55:31.685389 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:55:31 crc kubenswrapper[4705]: I0216 15:55:31.685438 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" gracePeriod=600 Feb 16 15:55:31 crc kubenswrapper[4705]: E0216 15:55:31.808548 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:55:32 crc kubenswrapper[4705]: I0216 15:55:32.462608 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" exitCode=0 Feb 16 15:55:32 crc kubenswrapper[4705]: I0216 15:55:32.462668 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"} Feb 16 15:55:32 crc kubenswrapper[4705]: I0216 15:55:32.462726 4705 scope.go:117] "RemoveContainer" containerID="4a9a2e2f883a2d51f28d2fe4041e4e904adcebfb7379e631823880828e02e2b8" Feb 16 15:55:32 crc kubenswrapper[4705]: I0216 15:55:32.464818 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:55:32 crc kubenswrapper[4705]: E0216 15:55:32.465219 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:55:34 crc kubenswrapper[4705]: E0216 15:55:34.423734 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:55:40 crc kubenswrapper[4705]: E0216 
15:55:40.421946 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:55:44 crc kubenswrapper[4705]: I0216 15:55:44.419272 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:55:44 crc kubenswrapper[4705]: E0216 15:55:44.421178 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:55:49 crc kubenswrapper[4705]: E0216 15:55:49.441941 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:55:52 crc kubenswrapper[4705]: I0216 15:55:52.424043 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:55:52 crc kubenswrapper[4705]: E0216 15:55:52.546850 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag 
current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:55:52 crc kubenswrapper[4705]: E0216 15:55:52.546935 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 15:55:52 crc kubenswrapper[4705]: E0216 15:55:52.547094 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:55:52 crc kubenswrapper[4705]: E0216 15:55:52.549131 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:55:59 crc kubenswrapper[4705]: I0216 15:55:59.420439 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:55:59 crc kubenswrapper[4705]: E0216 15:55:59.421309 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:56:03 crc kubenswrapper[4705]: E0216 15:56:03.514986 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:56:03 crc kubenswrapper[4705]: E0216 15:56:03.515642 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 15:56:03 crc kubenswrapper[4705]: E0216 15:56:03.515803 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:56:03 crc kubenswrapper[4705]: E0216 15:56:03.516918 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:56:05 crc kubenswrapper[4705]: E0216 15:56:05.422399 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:56:12 crc kubenswrapper[4705]: I0216 15:56:12.419682 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:56:12 crc kubenswrapper[4705]: E0216 15:56:12.420503 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:56:16 crc kubenswrapper[4705]: E0216 15:56:16.437900 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:56:18 crc kubenswrapper[4705]: E0216 15:56:18.422711 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:56:25 crc kubenswrapper[4705]: I0216 15:56:25.419278 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:56:25 crc kubenswrapper[4705]: E0216 15:56:25.420050 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:56:29 crc kubenswrapper[4705]: E0216 15:56:29.423498 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:56:32 crc kubenswrapper[4705]: E0216 15:56:32.422278 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:56:37 crc kubenswrapper[4705]: I0216 15:56:37.420348 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:56:37 crc kubenswrapper[4705]: E0216 15:56:37.421249 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:56:43 crc kubenswrapper[4705]: E0216 15:56:43.420791 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:56:46 crc kubenswrapper[4705]: E0216 15:56:46.432087 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:56:48 crc kubenswrapper[4705]: I0216 15:56:48.420193 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:56:48 crc kubenswrapper[4705]: E0216 15:56:48.421330 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:56:56 crc kubenswrapper[4705]: E0216 15:56:56.427889 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:56:59 crc kubenswrapper[4705]: E0216 15:56:59.423469 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:57:00 crc kubenswrapper[4705]: I0216 15:57:00.420662 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:57:00 crc kubenswrapper[4705]: E0216 15:57:00.421356 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:57:07 crc kubenswrapper[4705]: E0216 15:57:07.423061 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:57:13 crc kubenswrapper[4705]: E0216 15:57:13.423965 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:57:15 crc kubenswrapper[4705]: I0216 15:57:15.421041 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:57:15 crc kubenswrapper[4705]: E0216 15:57:15.421667 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:57:22 crc kubenswrapper[4705]: E0216 15:57:22.423801 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:57:24 crc kubenswrapper[4705]: E0216 15:57:24.423823 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:57:26 crc kubenswrapper[4705]: I0216 15:57:26.435643 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:57:26 crc kubenswrapper[4705]: E0216 15:57:26.436641 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:57:36 crc kubenswrapper[4705]: E0216 15:57:36.433299 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.714471 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jrzlm"] Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.721139 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.728973 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrzlm"] Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.806971 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7mqf\" (UniqueName: \"kubernetes.io/projected/82137727-e2d9-404a-9a97-f6a02ee6f25f-kube-api-access-z7mqf\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.807110 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-utilities\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " 
pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.807151 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-catalog-content\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.909801 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-catalog-content\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.909987 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7mqf\" (UniqueName: \"kubernetes.io/projected/82137727-e2d9-404a-9a97-f6a02ee6f25f-kube-api-access-z7mqf\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.910081 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-utilities\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.910428 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-catalog-content\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " 
pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.910526 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-utilities\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:36 crc kubenswrapper[4705]: I0216 15:57:36.936572 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7mqf\" (UniqueName: \"kubernetes.io/projected/82137727-e2d9-404a-9a97-f6a02ee6f25f-kube-api-access-z7mqf\") pod \"community-operators-jrzlm\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:37 crc kubenswrapper[4705]: I0216 15:57:37.050808 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:37 crc kubenswrapper[4705]: I0216 15:57:37.651200 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrzlm"] Feb 16 15:57:37 crc kubenswrapper[4705]: I0216 15:57:37.877659 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerStarted","Data":"4407bb3b1208fbab2924b1263c6c790691f41d878701ff18821df9bed5c5b5be"} Feb 16 15:57:38 crc kubenswrapper[4705]: E0216 15:57:38.421288 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:57:38 crc kubenswrapper[4705]: I0216 15:57:38.889068 4705 
generic.go:334] "Generic (PLEG): container finished" podID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerID="42ce4e0addaeffaf331f978bfecd58e49daffbcd26474b8a5e6259c4e372d5da" exitCode=0 Feb 16 15:57:38 crc kubenswrapper[4705]: I0216 15:57:38.889118 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerDied","Data":"42ce4e0addaeffaf331f978bfecd58e49daffbcd26474b8a5e6259c4e372d5da"} Feb 16 15:57:39 crc kubenswrapper[4705]: I0216 15:57:39.903298 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerStarted","Data":"fcdb2d6e6be0d768bddbedb97937147e4b45a055a895a05093067235aae58d56"} Feb 16 15:57:41 crc kubenswrapper[4705]: I0216 15:57:41.419819 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:57:41 crc kubenswrapper[4705]: E0216 15:57:41.420497 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:57:41 crc kubenswrapper[4705]: I0216 15:57:41.927960 4705 generic.go:334] "Generic (PLEG): container finished" podID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerID="fcdb2d6e6be0d768bddbedb97937147e4b45a055a895a05093067235aae58d56" exitCode=0 Feb 16 15:57:41 crc kubenswrapper[4705]: I0216 15:57:41.928039 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" 
event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerDied","Data":"fcdb2d6e6be0d768bddbedb97937147e4b45a055a895a05093067235aae58d56"} Feb 16 15:57:42 crc kubenswrapper[4705]: I0216 15:57:42.952731 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerStarted","Data":"433ea4059d33ffe36aae8decc88f406f808260d8a5f1bd117e4b591424321504"} Feb 16 15:57:42 crc kubenswrapper[4705]: I0216 15:57:42.975707 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jrzlm" podStartSLOduration=3.498798971 podStartE2EDuration="6.975689661s" podCreationTimestamp="2026-02-16 15:57:36 +0000 UTC" firstStartedPulling="2026-02-16 15:57:38.891602716 +0000 UTC m=+3853.076579792" lastFinishedPulling="2026-02-16 15:57:42.368493406 +0000 UTC m=+3856.553470482" observedRunningTime="2026-02-16 15:57:42.975204787 +0000 UTC m=+3857.160181893" watchObservedRunningTime="2026-02-16 15:57:42.975689661 +0000 UTC m=+3857.160666737" Feb 16 15:57:47 crc kubenswrapper[4705]: I0216 15:57:47.052331 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:47 crc kubenswrapper[4705]: I0216 15:57:47.052696 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:47 crc kubenswrapper[4705]: I0216 15:57:47.103973 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:48 crc kubenswrapper[4705]: I0216 15:57:48.089550 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:48 crc kubenswrapper[4705]: I0216 15:57:48.163209 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-jrzlm"] Feb 16 15:57:50 crc kubenswrapper[4705]: I0216 15:57:50.036834 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jrzlm" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="registry-server" containerID="cri-o://433ea4059d33ffe36aae8decc88f406f808260d8a5f1bd117e4b591424321504" gracePeriod=2 Feb 16 15:57:50 crc kubenswrapper[4705]: E0216 15:57:50.421668 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.051924 4705 generic.go:334] "Generic (PLEG): container finished" podID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerID="433ea4059d33ffe36aae8decc88f406f808260d8a5f1bd117e4b591424321504" exitCode=0 Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.051973 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerDied","Data":"433ea4059d33ffe36aae8decc88f406f808260d8a5f1bd117e4b591424321504"} Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.052047 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrzlm" event={"ID":"82137727-e2d9-404a-9a97-f6a02ee6f25f","Type":"ContainerDied","Data":"4407bb3b1208fbab2924b1263c6c790691f41d878701ff18821df9bed5c5b5be"} Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.052064 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4407bb3b1208fbab2924b1263c6c790691f41d878701ff18821df9bed5c5b5be" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 
15:57:51.147579 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.214653 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7mqf\" (UniqueName: \"kubernetes.io/projected/82137727-e2d9-404a-9a97-f6a02ee6f25f-kube-api-access-z7mqf\") pod \"82137727-e2d9-404a-9a97-f6a02ee6f25f\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.215308 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-utilities\") pod \"82137727-e2d9-404a-9a97-f6a02ee6f25f\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.215354 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-catalog-content\") pod \"82137727-e2d9-404a-9a97-f6a02ee6f25f\" (UID: \"82137727-e2d9-404a-9a97-f6a02ee6f25f\") " Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.217183 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-utilities" (OuterVolumeSpecName: "utilities") pod "82137727-e2d9-404a-9a97-f6a02ee6f25f" (UID: "82137727-e2d9-404a-9a97-f6a02ee6f25f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.232728 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82137727-e2d9-404a-9a97-f6a02ee6f25f-kube-api-access-z7mqf" (OuterVolumeSpecName: "kube-api-access-z7mqf") pod "82137727-e2d9-404a-9a97-f6a02ee6f25f" (UID: "82137727-e2d9-404a-9a97-f6a02ee6f25f"). InnerVolumeSpecName "kube-api-access-z7mqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.293581 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82137727-e2d9-404a-9a97-f6a02ee6f25f" (UID: "82137727-e2d9-404a-9a97-f6a02ee6f25f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.318133 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7mqf\" (UniqueName: \"kubernetes.io/projected/82137727-e2d9-404a-9a97-f6a02ee6f25f-kube-api-access-z7mqf\") on node \"crc\" DevicePath \"\"" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.318164 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:57:51 crc kubenswrapper[4705]: I0216 15:57:51.318174 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82137727-e2d9-404a-9a97-f6a02ee6f25f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:57:51 crc kubenswrapper[4705]: E0216 15:57:51.421268 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:57:52 crc kubenswrapper[4705]: I0216 15:57:52.063320 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrzlm" Feb 16 15:57:52 crc kubenswrapper[4705]: I0216 15:57:52.109410 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jrzlm"] Feb 16 15:57:52 crc kubenswrapper[4705]: I0216 15:57:52.122681 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jrzlm"] Feb 16 15:57:52 crc kubenswrapper[4705]: I0216 15:57:52.469222 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" path="/var/lib/kubelet/pods/82137727-e2d9-404a-9a97-f6a02ee6f25f/volumes" Feb 16 15:57:53 crc kubenswrapper[4705]: I0216 15:57:53.420789 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:57:53 crc kubenswrapper[4705]: E0216 15:57:53.421364 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:58:04 crc kubenswrapper[4705]: I0216 15:58:04.419932 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:58:04 crc kubenswrapper[4705]: E0216 15:58:04.420866 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:58:04 crc kubenswrapper[4705]: E0216 15:58:04.421959 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:58:05 crc kubenswrapper[4705]: E0216 15:58:05.422102 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:58:14 crc kubenswrapper[4705]: I0216 15:58:14.378249 4705 generic.go:334] "Generic (PLEG): container finished" podID="49d4643c-71ab-4c0f-b3cb-0f494971aa6e" containerID="e4aefe0d3bc6b447e40b188d95e9547cb87edeaef2a29ac55cc4d26271d01d98" exitCode=2 Feb 16 15:58:14 crc kubenswrapper[4705]: I0216 15:58:14.378356 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" event={"ID":"49d4643c-71ab-4c0f-b3cb-0f494971aa6e","Type":"ContainerDied","Data":"e4aefe0d3bc6b447e40b188d95e9547cb87edeaef2a29ac55cc4d26271d01d98"} Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.006718 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.078693 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-ssh-key-openstack-edpm-ipam\") pod \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.078868 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r977\" (UniqueName: \"kubernetes.io/projected/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-kube-api-access-5r977\") pod \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.078931 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-inventory\") pod \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\" (UID: \"49d4643c-71ab-4c0f-b3cb-0f494971aa6e\") " Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.087877 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-kube-api-access-5r977" (OuterVolumeSpecName: "kube-api-access-5r977") pod "49d4643c-71ab-4c0f-b3cb-0f494971aa6e" (UID: "49d4643c-71ab-4c0f-b3cb-0f494971aa6e"). InnerVolumeSpecName "kube-api-access-5r977". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.109538 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "49d4643c-71ab-4c0f-b3cb-0f494971aa6e" (UID: "49d4643c-71ab-4c0f-b3cb-0f494971aa6e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.123781 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-inventory" (OuterVolumeSpecName: "inventory") pod "49d4643c-71ab-4c0f-b3cb-0f494971aa6e" (UID: "49d4643c-71ab-4c0f-b3cb-0f494971aa6e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.181939 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.182022 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r977\" (UniqueName: \"kubernetes.io/projected/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-kube-api-access-5r977\") on node \"crc\" DevicePath \"\"" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.182081 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49d4643c-71ab-4c0f-b3cb-0f494971aa6e-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.399180 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" 
event={"ID":"49d4643c-71ab-4c0f-b3cb-0f494971aa6e","Type":"ContainerDied","Data":"7d0a2e37aabc9be4da171f1a7589105d521f79b0a14feba542fbc144bbbfd51c"} Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.399234 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d0a2e37aabc9be4da171f1a7589105d521f79b0a14feba542fbc144bbbfd51c" Feb 16 15:58:16 crc kubenswrapper[4705]: I0216 15:58:16.399241 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49hkn" Feb 16 15:58:16 crc kubenswrapper[4705]: E0216 15:58:16.421394 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:58:16 crc kubenswrapper[4705]: E0216 15:58:16.421970 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:58:19 crc kubenswrapper[4705]: I0216 15:58:19.419782 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:58:19 crc kubenswrapper[4705]: E0216 15:58:19.420668 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:58:27 crc kubenswrapper[4705]: E0216 15:58:27.422565 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:58:31 crc kubenswrapper[4705]: I0216 15:58:31.420239 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:58:31 crc kubenswrapper[4705]: E0216 15:58:31.420824 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:58:31 crc kubenswrapper[4705]: E0216 15:58:31.422214 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:58:39 crc kubenswrapper[4705]: E0216 15:58:39.422203 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" 
Feb 16 15:58:44 crc kubenswrapper[4705]: E0216 15:58:44.421970 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:58:45 crc kubenswrapper[4705]: I0216 15:58:45.419762 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:58:45 crc kubenswrapper[4705]: E0216 15:58:45.420078 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:58:50 crc kubenswrapper[4705]: E0216 15:58:50.423568 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:58:55 crc kubenswrapper[4705]: E0216 15:58:55.423181 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:58:56 crc kubenswrapper[4705]: I0216 15:58:56.432011 4705 scope.go:117] "RemoveContainer" 
containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:58:56 crc kubenswrapper[4705]: E0216 15:58:56.432542 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:59:04 crc kubenswrapper[4705]: E0216 15:59:04.423790 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:59:06 crc kubenswrapper[4705]: E0216 15:59:06.434135 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:59:08 crc kubenswrapper[4705]: I0216 15:59:08.419587 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:59:08 crc kubenswrapper[4705]: E0216 15:59:08.420636 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:59:18 crc kubenswrapper[4705]: E0216 15:59:18.422575 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 15:59:19 crc kubenswrapper[4705]: E0216 15:59:19.424944 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 15:59:23 crc kubenswrapper[4705]: I0216 15:59:23.419909 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 15:59:23 crc kubenswrapper[4705]: E0216 15:59:23.420929 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 15:59:33 crc kubenswrapper[4705]: E0216 15:59:33.422594 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" 
Feb 16 15:59:33 crc kubenswrapper[4705]: E0216 15:59:33.422622 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:59:34 crc kubenswrapper[4705]: I0216 15:59:34.419270 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"
Feb 16 15:59:34 crc kubenswrapper[4705]: E0216 15:59:34.419937 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:59:46 crc kubenswrapper[4705]: E0216 15:59:46.429984 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 15:59:46 crc kubenswrapper[4705]: E0216 15:59:46.430198 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 15:59:48 crc kubenswrapper[4705]: I0216 15:59:48.420002 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"
Feb 16 15:59:48 crc kubenswrapper[4705]: E0216 15:59:48.420809 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 15:59:59 crc kubenswrapper[4705]: E0216 15:59:59.422525 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.189006 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"]
Feb 16 16:00:00 crc kubenswrapper[4705]: E0216 16:00:00.190254 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="registry-server"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.190415 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="registry-server"
Feb 16 16:00:00 crc kubenswrapper[4705]: E0216 16:00:00.190542 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="extract-utilities"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.190627 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="extract-utilities"
Feb 16 16:00:00 crc kubenswrapper[4705]: E0216 16:00:00.190727 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="extract-content"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.190802 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="extract-content"
Feb 16 16:00:00 crc kubenswrapper[4705]: E0216 16:00:00.190927 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d4643c-71ab-4c0f-b3cb-0f494971aa6e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.191013 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d4643c-71ab-4c0f-b3cb-0f494971aa6e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.191430 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d4643c-71ab-4c0f-b3cb-0f494971aa6e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.191598 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="82137727-e2d9-404a-9a97-f6a02ee6f25f" containerName="registry-server"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.192943 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.224344 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.225740 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.270921 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxr2h\" (UniqueName: \"kubernetes.io/projected/27a4e901-f4ae-4bc7-b818-5b98c0024653-kube-api-access-nxr2h\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.271156 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a4e901-f4ae-4bc7-b818-5b98c0024653-config-volume\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.271198 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a4e901-f4ae-4bc7-b818-5b98c0024653-secret-volume\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.375124 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxr2h\" (UniqueName: \"kubernetes.io/projected/27a4e901-f4ae-4bc7-b818-5b98c0024653-kube-api-access-nxr2h\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.375293 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a4e901-f4ae-4bc7-b818-5b98c0024653-config-volume\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.375320 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a4e901-f4ae-4bc7-b818-5b98c0024653-secret-volume\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.390791 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a4e901-f4ae-4bc7-b818-5b98c0024653-config-volume\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.732704 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxr2h\" (UniqueName: \"kubernetes.io/projected/27a4e901-f4ae-4bc7-b818-5b98c0024653-kube-api-access-nxr2h\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.732963 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a4e901-f4ae-4bc7-b818-5b98c0024653-secret-volume\") pod \"collect-profiles-29520960-67vqh\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: I0216 16:00:00.739884 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:00 crc kubenswrapper[4705]: E0216 16:00:00.744109 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:00:01 crc kubenswrapper[4705]: I0216 16:00:01.003111 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"]
Feb 16 16:00:01 crc kubenswrapper[4705]: I0216 16:00:01.445643 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"]
Feb 16 16:00:01 crc kubenswrapper[4705]: I0216 16:00:01.789490 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh" event={"ID":"27a4e901-f4ae-4bc7-b818-5b98c0024653","Type":"ContainerStarted","Data":"566e4f974460b487e11ecfac752a62b246bf02aa88627b3d38ce65ecb5933671"}
Feb 16 16:00:01 crc kubenswrapper[4705]: I0216 16:00:01.789951 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh" event={"ID":"27a4e901-f4ae-4bc7-b818-5b98c0024653","Type":"ContainerStarted","Data":"d9e3ba4203d8c7168153a0b9a691012aa0aff30678359b66587411b53dcfb3f5"}
Feb 16 16:00:01 crc kubenswrapper[4705]: I0216 16:00:01.825057 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh" podStartSLOduration=1.825035008 podStartE2EDuration="1.825035008s" podCreationTimestamp="2026-02-16 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 16:00:01.813256874 +0000 UTC m=+3995.998233950" watchObservedRunningTime="2026-02-16 16:00:01.825035008 +0000 UTC m=+3996.010012084"
Feb 16 16:00:02 crc kubenswrapper[4705]: I0216 16:00:02.419586 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"
Feb 16 16:00:02 crc kubenswrapper[4705]: E0216 16:00:02.419869 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:00:02 crc kubenswrapper[4705]: I0216 16:00:02.810192 4705 generic.go:334] "Generic (PLEG): container finished" podID="27a4e901-f4ae-4bc7-b818-5b98c0024653" containerID="566e4f974460b487e11ecfac752a62b246bf02aa88627b3d38ce65ecb5933671" exitCode=0
Feb 16 16:00:02 crc kubenswrapper[4705]: I0216 16:00:02.810277 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh" event={"ID":"27a4e901-f4ae-4bc7-b818-5b98c0024653","Type":"ContainerDied","Data":"566e4f974460b487e11ecfac752a62b246bf02aa88627b3d38ce65ecb5933671"}
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.293183 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.427093 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxr2h\" (UniqueName: \"kubernetes.io/projected/27a4e901-f4ae-4bc7-b818-5b98c0024653-kube-api-access-nxr2h\") pod \"27a4e901-f4ae-4bc7-b818-5b98c0024653\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") "
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.427615 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a4e901-f4ae-4bc7-b818-5b98c0024653-secret-volume\") pod \"27a4e901-f4ae-4bc7-b818-5b98c0024653\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") "
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.427715 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a4e901-f4ae-4bc7-b818-5b98c0024653-config-volume\") pod \"27a4e901-f4ae-4bc7-b818-5b98c0024653\" (UID: \"27a4e901-f4ae-4bc7-b818-5b98c0024653\") "
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.428611 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a4e901-f4ae-4bc7-b818-5b98c0024653-config-volume" (OuterVolumeSpecName: "config-volume") pod "27a4e901-f4ae-4bc7-b818-5b98c0024653" (UID: "27a4e901-f4ae-4bc7-b818-5b98c0024653"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.443681 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a4e901-f4ae-4bc7-b818-5b98c0024653-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "27a4e901-f4ae-4bc7-b818-5b98c0024653" (UID: "27a4e901-f4ae-4bc7-b818-5b98c0024653"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.443746 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a4e901-f4ae-4bc7-b818-5b98c0024653-kube-api-access-nxr2h" (OuterVolumeSpecName: "kube-api-access-nxr2h") pod "27a4e901-f4ae-4bc7-b818-5b98c0024653" (UID: "27a4e901-f4ae-4bc7-b818-5b98c0024653"). InnerVolumeSpecName "kube-api-access-nxr2h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.521576 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm"]
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.531466 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxr2h\" (UniqueName: \"kubernetes.io/projected/27a4e901-f4ae-4bc7-b818-5b98c0024653-kube-api-access-nxr2h\") on node \"crc\" DevicePath \"\""
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.531508 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27a4e901-f4ae-4bc7-b818-5b98c0024653-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.531519 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a4e901-f4ae-4bc7-b818-5b98c0024653-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.534403 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-lwjnm"]
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.831780 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh" event={"ID":"27a4e901-f4ae-4bc7-b818-5b98c0024653","Type":"ContainerDied","Data":"d9e3ba4203d8c7168153a0b9a691012aa0aff30678359b66587411b53dcfb3f5"}
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.831854 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-67vqh"
Feb 16 16:00:04 crc kubenswrapper[4705]: I0216 16:00:04.831859 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9e3ba4203d8c7168153a0b9a691012aa0aff30678359b66587411b53dcfb3f5"
Feb 16 16:00:06 crc kubenswrapper[4705]: I0216 16:00:06.434133 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6f056a-614c-4e3d-9bfe-de451b1d951d" path="/var/lib/kubelet/pods/4c6f056a-614c-4e3d-9bfe-de451b1d951d/volumes"
Feb 16 16:00:12 crc kubenswrapper[4705]: E0216 16:00:12.421650 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:00:13 crc kubenswrapper[4705]: E0216 16:00:13.422318 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:00:16 crc kubenswrapper[4705]: I0216 16:00:16.428405 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"
Feb 16 16:00:16 crc kubenswrapper[4705]: E0216 16:00:16.429173 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:00:25 crc kubenswrapper[4705]: E0216 16:00:25.422158 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:00:27 crc kubenswrapper[4705]: E0216 16:00:27.421282 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:00:31 crc kubenswrapper[4705]: I0216 16:00:31.420443 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"
Feb 16 16:00:31 crc kubenswrapper[4705]: E0216 16:00:31.421227 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c"
Feb 16 16:00:35 crc kubenswrapper[4705]: I0216 16:00:35.666184 4705 scope.go:117] "RemoveContainer" containerID="12cac5303820f9f4b9790cf3756c563cd44a6389204cd476bba276cfd10f485f"
Feb 16 16:00:39 crc kubenswrapper[4705]: E0216 16:00:39.422407 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:00:41 crc kubenswrapper[4705]: E0216 16:00:41.423084 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:00:42 crc kubenswrapper[4705]: I0216 16:00:42.420189 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1"
Feb 16 16:00:43 crc kubenswrapper[4705]: I0216 16:00:43.310223 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"314a59c1bcbfd4161cca01cf480806fab16a11ba92c42dabbd75887792f28fb9"}
Feb 16 16:00:52 crc kubenswrapper[4705]: E0216 16:00:52.422544 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.042044 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"]
Feb 16 16:00:53 crc kubenswrapper[4705]: E0216 16:00:53.042585 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a4e901-f4ae-4bc7-b818-5b98c0024653" containerName="collect-profiles"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.042606 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a4e901-f4ae-4bc7-b818-5b98c0024653" containerName="collect-profiles"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.042856 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a4e901-f4ae-4bc7-b818-5b98c0024653" containerName="collect-profiles"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.043936 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.047161 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.047677 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.048844 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.050067 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.067669 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"]
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.236376 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.236942 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-687w8\" (UniqueName: \"kubernetes.io/projected/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-kube-api-access-687w8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.237117 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.338939 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-687w8\" (UniqueName: \"kubernetes.io/projected/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-kube-api-access-687w8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.339048 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.339179 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.346187 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.347971 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.357698 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-687w8\" (UniqueName: \"kubernetes.io/projected/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-kube-api-access-687w8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.368771 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.939987 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6"]
Feb 16 16:00:53 crc kubenswrapper[4705]: I0216 16:00:53.952951 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 16:00:54 crc kubenswrapper[4705]: E0216 16:00:54.421017 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214"
Feb 16 16:00:54 crc kubenswrapper[4705]: I0216 16:00:54.446195 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" event={"ID":"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c","Type":"ContainerStarted","Data":"f200efbd485249ddfdf83b40b40f349bd03520224bed729f92b3d095ed0ae82e"}
Feb 16 16:00:55 crc kubenswrapper[4705]: I0216 16:00:55.457641 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" event={"ID":"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c","Type":"ContainerStarted","Data":"7945d6ad7374ab3b23b668ea795bd7af5c36b315c187c0f9f1d7dca19352746b"}
Feb 16 16:00:55 crc kubenswrapper[4705]: I0216 16:00:55.481353 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" podStartSLOduration=1.8201837360000002 podStartE2EDuration="2.481325226s" podCreationTimestamp="2026-02-16 16:00:53 +0000 UTC" firstStartedPulling="2026-02-16 16:00:53.952630803 +0000 UTC m=+4048.137607889" lastFinishedPulling="2026-02-16 16:00:54.613772303 +0000 UTC m=+4048.798749379" observedRunningTime="2026-02-16 16:00:55.475171412 +0000 UTC m=+4049.660148488" watchObservedRunningTime="2026-02-16 16:00:55.481325226 +0000 UTC m=+4049.666302302"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.162252 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29520961-75mxg"]
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.165130 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.178161 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520961-75mxg"]
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.334790 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-combined-ca-bundle\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.335958 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-fernet-keys\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.336029 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-config-data\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.336498 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs8dm\" (UniqueName: \"kubernetes.io/projected/98bca645-7f96-4667-adb9-cf4c5002ba78-kube-api-access-bs8dm\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.441671 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-combined-ca-bundle\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.441793 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-fernet-keys\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.441892 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-config-data\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.442023 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs8dm\" (UniqueName: \"kubernetes.io/projected/98bca645-7f96-4667-adb9-cf4c5002ba78-kube-api-access-bs8dm\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.832840 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-combined-ca-bundle\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.832941 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-fernet-keys\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.834185 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs8dm\" (UniqueName: \"kubernetes.io/projected/98bca645-7f96-4667-adb9-cf4c5002ba78-kube-api-access-bs8dm\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:00 crc kubenswrapper[4705]: I0216 16:01:00.836468 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-config-data\") pod \"keystone-cron-29520961-75mxg\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:01 crc kubenswrapper[4705]: I0216 16:01:01.091441 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520961-75mxg"
Feb 16 16:01:01 crc kubenswrapper[4705]: I0216 16:01:01.587135 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520961-75mxg"]
Feb 16 16:01:02 crc kubenswrapper[4705]: I0216 16:01:02.540546 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520961-75mxg" event={"ID":"98bca645-7f96-4667-adb9-cf4c5002ba78","Type":"ContainerStarted","Data":"a39d9b6ccfe88ad9e7294574b4ac279e3a3d9de4fb645305b04c16257ab0726a"}
Feb 16 16:01:02 crc kubenswrapper[4705]: I0216 16:01:02.540883 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520961-75mxg" event={"ID":"98bca645-7f96-4667-adb9-cf4c5002ba78","Type":"ContainerStarted","Data":"f901503dffd6c6aa6435c4b73cc4fb63e002513cbb057cd43d9905bbebca9811"}
Feb 16 16:01:02 crc kubenswrapper[4705]: I0216 16:01:02.560915 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29520961-75mxg" podStartSLOduration=2.560893866 podStartE2EDuration="2.560893866s" podCreationTimestamp="2026-02-16 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 16:01:02.555347039 +0000 UTC m=+4056.740324115" watchObservedRunningTime="2026-02-16 16:01:02.560893866 +0000 UTC m=+4056.745870942"
Feb 16 16:01:05 crc kubenswrapper[4705]: I0216 16:01:05.577131 4705 generic.go:334] "Generic (PLEG): container finished" podID="98bca645-7f96-4667-adb9-cf4c5002ba78" containerID="a39d9b6ccfe88ad9e7294574b4ac279e3a3d9de4fb645305b04c16257ab0726a" exitCode=0
Feb 16 16:01:05 crc kubenswrapper[4705]: I0216 16:01:05.577185 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520961-75mxg" event={"ID":"98bca645-7f96-4667-adb9-cf4c5002ba78","Type":"ContainerDied","Data":"a39d9b6ccfe88ad9e7294574b4ac279e3a3d9de4fb645305b04c16257ab0726a"}
Feb 16 16:01:06 crc kubenswrapper[4705]: E0216 16:01:06.564760 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 16 16:01:06 crc kubenswrapper[4705]: E0216 16:01:06.565072 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired.
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:01:06 crc kubenswrapper[4705]: E0216 16:01:06.565225 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:01:06 crc kubenswrapper[4705]: E0216 16:01:06.566434 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.043539 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520961-75mxg" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.223971 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs8dm\" (UniqueName: \"kubernetes.io/projected/98bca645-7f96-4667-adb9-cf4c5002ba78-kube-api-access-bs8dm\") pod \"98bca645-7f96-4667-adb9-cf4c5002ba78\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.224097 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-combined-ca-bundle\") pod \"98bca645-7f96-4667-adb9-cf4c5002ba78\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.224121 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-fernet-keys\") pod \"98bca645-7f96-4667-adb9-cf4c5002ba78\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.224241 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-config-data\") pod \"98bca645-7f96-4667-adb9-cf4c5002ba78\" (UID: \"98bca645-7f96-4667-adb9-cf4c5002ba78\") " Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.230871 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "98bca645-7f96-4667-adb9-cf4c5002ba78" (UID: "98bca645-7f96-4667-adb9-cf4c5002ba78"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.236529 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bca645-7f96-4667-adb9-cf4c5002ba78-kube-api-access-bs8dm" (OuterVolumeSpecName: "kube-api-access-bs8dm") pod "98bca645-7f96-4667-adb9-cf4c5002ba78" (UID: "98bca645-7f96-4667-adb9-cf4c5002ba78"). InnerVolumeSpecName "kube-api-access-bs8dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.267937 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98bca645-7f96-4667-adb9-cf4c5002ba78" (UID: "98bca645-7f96-4667-adb9-cf4c5002ba78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.319506 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-config-data" (OuterVolumeSpecName: "config-data") pod "98bca645-7f96-4667-adb9-cf4c5002ba78" (UID: "98bca645-7f96-4667-adb9-cf4c5002ba78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.330037 4705 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.330100 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs8dm\" (UniqueName: \"kubernetes.io/projected/98bca645-7f96-4667-adb9-cf4c5002ba78-kube-api-access-bs8dm\") on node \"crc\" DevicePath \"\"" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.330118 4705 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.330135 4705 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98bca645-7f96-4667-adb9-cf4c5002ba78-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.606294 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520961-75mxg" event={"ID":"98bca645-7f96-4667-adb9-cf4c5002ba78","Type":"ContainerDied","Data":"f901503dffd6c6aa6435c4b73cc4fb63e002513cbb057cd43d9905bbebca9811"} Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.606341 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f901503dffd6c6aa6435c4b73cc4fb63e002513cbb057cd43d9905bbebca9811" Feb 16 16:01:07 crc kubenswrapper[4705]: I0216 16:01:07.606490 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520961-75mxg" Feb 16 16:01:09 crc kubenswrapper[4705]: E0216 16:01:09.548556 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:01:09 crc kubenswrapper[4705]: E0216 16:01:09.549208 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:01:09 crc kubenswrapper[4705]: E0216 16:01:09.549442 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:01:09 crc kubenswrapper[4705]: E0216 16:01:09.550931 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:01:18 crc kubenswrapper[4705]: E0216 16:01:18.422345 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:01:22 crc kubenswrapper[4705]: E0216 16:01:22.432316 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.749798 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5fgwc"] Feb 16 16:01:23 crc kubenswrapper[4705]: E0216 16:01:23.750792 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bca645-7f96-4667-adb9-cf4c5002ba78" containerName="keystone-cron" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.750811 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bca645-7f96-4667-adb9-cf4c5002ba78" containerName="keystone-cron" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.751118 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="98bca645-7f96-4667-adb9-cf4c5002ba78" containerName="keystone-cron" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.753460 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.764847 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-catalog-content\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.764923 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-utilities\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.764932 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5fgwc"] Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.764994 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrtgb\" (UniqueName: \"kubernetes.io/projected/45a762e5-ea54-48f8-855c-71726ce18208-kube-api-access-rrtgb\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.867452 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-catalog-content\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.867533 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-utilities\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.867581 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrtgb\" (UniqueName: \"kubernetes.io/projected/45a762e5-ea54-48f8-855c-71726ce18208-kube-api-access-rrtgb\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.867973 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-utilities\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.867988 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-catalog-content\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:23 crc kubenswrapper[4705]: I0216 16:01:23.890306 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrtgb\" (UniqueName: \"kubernetes.io/projected/45a762e5-ea54-48f8-855c-71726ce18208-kube-api-access-rrtgb\") pod \"redhat-operators-5fgwc\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:24 crc kubenswrapper[4705]: I0216 16:01:24.107058 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:24 crc kubenswrapper[4705]: I0216 16:01:24.623896 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5fgwc"] Feb 16 16:01:24 crc kubenswrapper[4705]: I0216 16:01:24.814916 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerStarted","Data":"6764853331df6a6460f33d1474eb9cab471934aabdc993a1e48b65054f9958a8"} Feb 16 16:01:25 crc kubenswrapper[4705]: I0216 16:01:25.828438 4705 generic.go:334] "Generic (PLEG): container finished" podID="45a762e5-ea54-48f8-855c-71726ce18208" containerID="3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0" exitCode=0 Feb 16 16:01:25 crc kubenswrapper[4705]: I0216 16:01:25.828641 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerDied","Data":"3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0"} Feb 16 16:01:27 crc kubenswrapper[4705]: I0216 16:01:27.857574 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerStarted","Data":"1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98"} Feb 16 16:01:31 crc kubenswrapper[4705]: E0216 16:01:31.423851 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:01:32 crc kubenswrapper[4705]: I0216 16:01:32.920758 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="45a762e5-ea54-48f8-855c-71726ce18208" containerID="1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98" exitCode=0 Feb 16 16:01:32 crc kubenswrapper[4705]: I0216 16:01:32.920805 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerDied","Data":"1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98"} Feb 16 16:01:33 crc kubenswrapper[4705]: I0216 16:01:33.935134 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerStarted","Data":"787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6"} Feb 16 16:01:33 crc kubenswrapper[4705]: I0216 16:01:33.968856 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5fgwc" podStartSLOduration=3.461783887 podStartE2EDuration="10.968833334s" podCreationTimestamp="2026-02-16 16:01:23 +0000 UTC" firstStartedPulling="2026-02-16 16:01:25.830497191 +0000 UTC m=+4080.015474267" lastFinishedPulling="2026-02-16 16:01:33.337546638 +0000 UTC m=+4087.522523714" observedRunningTime="2026-02-16 16:01:33.966529498 +0000 UTC m=+4088.151506614" watchObservedRunningTime="2026-02-16 16:01:33.968833334 +0000 UTC m=+4088.153810400" Feb 16 16:01:34 crc kubenswrapper[4705]: I0216 16:01:34.107716 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:34 crc kubenswrapper[4705]: I0216 16:01:34.107766 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:01:34 crc kubenswrapper[4705]: E0216 16:01:34.423938 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:01:35 crc kubenswrapper[4705]: I0216 16:01:35.184519 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5fgwc" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="registry-server" probeResult="failure" output=< Feb 16 16:01:35 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:01:35 crc kubenswrapper[4705]: > Feb 16 16:01:40 crc kubenswrapper[4705]: I0216 16:01:40.926196 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6pbhc"] Feb 16 16:01:40 crc kubenswrapper[4705]: I0216 16:01:40.929315 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:40 crc kubenswrapper[4705]: I0216 16:01:40.946279 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pbhc"] Feb 16 16:01:40 crc kubenswrapper[4705]: I0216 16:01:40.973830 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffrhs\" (UniqueName: \"kubernetes.io/projected/170bdaa1-dc08-4282-955b-debf707fd9f1-kube-api-access-ffrhs\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:40 crc kubenswrapper[4705]: I0216 16:01:40.973899 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-catalog-content\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:40 crc 
kubenswrapper[4705]: I0216 16:01:40.974116 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-utilities\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.077407 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-utilities\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.077701 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffrhs\" (UniqueName: \"kubernetes.io/projected/170bdaa1-dc08-4282-955b-debf707fd9f1-kube-api-access-ffrhs\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.077753 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-catalog-content\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.077906 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-utilities\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 
16:01:41.078076 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-catalog-content\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.130225 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffrhs\" (UniqueName: \"kubernetes.io/projected/170bdaa1-dc08-4282-955b-debf707fd9f1-kube-api-access-ffrhs\") pod \"certified-operators-6pbhc\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.257570 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:41 crc kubenswrapper[4705]: I0216 16:01:41.847743 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pbhc"] Feb 16 16:01:42 crc kubenswrapper[4705]: I0216 16:01:42.066650 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerStarted","Data":"d3cbf362788ecca0b02f3b4fcb46b0e4f0ad609ca73ee1c2df2ee5804e7a4670"} Feb 16 16:01:42 crc kubenswrapper[4705]: E0216 16:01:42.422319 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:01:43 crc kubenswrapper[4705]: I0216 16:01:43.081252 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerID="4d9d713141a3f7aae0f29ba8e808800a207d2293c8b10b72e5b38efe8b4e1b72" exitCode=0 Feb 16 16:01:43 crc kubenswrapper[4705]: I0216 16:01:43.081350 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerDied","Data":"4d9d713141a3f7aae0f29ba8e808800a207d2293c8b10b72e5b38efe8b4e1b72"} Feb 16 16:01:44 crc kubenswrapper[4705]: I0216 16:01:44.109430 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerStarted","Data":"0c5e900cecec2198ca2b7f8dc95e8434953c226ab2da5841e59c797336ef7673"} Feb 16 16:01:45 crc kubenswrapper[4705]: I0216 16:01:45.169932 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5fgwc" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="registry-server" probeResult="failure" output=< Feb 16 16:01:45 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:01:45 crc kubenswrapper[4705]: > Feb 16 16:01:45 crc kubenswrapper[4705]: E0216 16:01:45.422332 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:01:47 crc kubenswrapper[4705]: I0216 16:01:47.141116 4705 generic.go:334] "Generic (PLEG): container finished" podID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerID="0c5e900cecec2198ca2b7f8dc95e8434953c226ab2da5841e59c797336ef7673" exitCode=0 Feb 16 16:01:47 crc kubenswrapper[4705]: I0216 16:01:47.141179 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerDied","Data":"0c5e900cecec2198ca2b7f8dc95e8434953c226ab2da5841e59c797336ef7673"} Feb 16 16:01:48 crc kubenswrapper[4705]: I0216 16:01:48.154728 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerStarted","Data":"1215060225f5cdf9e6306af8c84f46842dbe2f8e8253cc47d3a4f61e96ef1081"} Feb 16 16:01:48 crc kubenswrapper[4705]: I0216 16:01:48.186347 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6pbhc" podStartSLOduration=3.714002632 podStartE2EDuration="8.186323325s" podCreationTimestamp="2026-02-16 16:01:40 +0000 UTC" firstStartedPulling="2026-02-16 16:01:43.084565269 +0000 UTC m=+4097.269542345" lastFinishedPulling="2026-02-16 16:01:47.556885962 +0000 UTC m=+4101.741863038" observedRunningTime="2026-02-16 16:01:48.176716333 +0000 UTC m=+4102.361693409" watchObservedRunningTime="2026-02-16 16:01:48.186323325 +0000 UTC m=+4102.371300411" Feb 16 16:01:51 crc kubenswrapper[4705]: I0216 16:01:51.258425 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:51 crc kubenswrapper[4705]: I0216 16:01:51.259041 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:51 crc kubenswrapper[4705]: I0216 16:01:51.872946 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:01:55 crc kubenswrapper[4705]: I0216 16:01:55.163375 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5fgwc" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="registry-server" 
probeResult="failure" output=< Feb 16 16:01:55 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:01:55 crc kubenswrapper[4705]: > Feb 16 16:01:56 crc kubenswrapper[4705]: E0216 16:01:56.435119 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:02:00 crc kubenswrapper[4705]: E0216 16:02:00.424445 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:02:01 crc kubenswrapper[4705]: I0216 16:02:01.308561 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:02:03 crc kubenswrapper[4705]: I0216 16:02:03.966154 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6pbhc"] Feb 16 16:02:03 crc kubenswrapper[4705]: I0216 16:02:03.966838 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6pbhc" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="registry-server" containerID="cri-o://1215060225f5cdf9e6306af8c84f46842dbe2f8e8253cc47d3a4f61e96ef1081" gracePeriod=2 Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.195278 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.257271 4705 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.554546 4705 generic.go:334] "Generic (PLEG): container finished" podID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerID="1215060225f5cdf9e6306af8c84f46842dbe2f8e8253cc47d3a4f61e96ef1081" exitCode=0 Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.554642 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerDied","Data":"1215060225f5cdf9e6306af8c84f46842dbe2f8e8253cc47d3a4f61e96ef1081"} Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.554943 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pbhc" event={"ID":"170bdaa1-dc08-4282-955b-debf707fd9f1","Type":"ContainerDied","Data":"d3cbf362788ecca0b02f3b4fcb46b0e4f0ad609ca73ee1c2df2ee5804e7a4670"} Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.554978 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3cbf362788ecca0b02f3b4fcb46b0e4f0ad609ca73ee1c2df2ee5804e7a4670" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.611713 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.742077 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-utilities\") pod \"170bdaa1-dc08-4282-955b-debf707fd9f1\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.742204 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-catalog-content\") pod \"170bdaa1-dc08-4282-955b-debf707fd9f1\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.742298 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffrhs\" (UniqueName: \"kubernetes.io/projected/170bdaa1-dc08-4282-955b-debf707fd9f1-kube-api-access-ffrhs\") pod \"170bdaa1-dc08-4282-955b-debf707fd9f1\" (UID: \"170bdaa1-dc08-4282-955b-debf707fd9f1\") " Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.742881 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-utilities" (OuterVolumeSpecName: "utilities") pod "170bdaa1-dc08-4282-955b-debf707fd9f1" (UID: "170bdaa1-dc08-4282-955b-debf707fd9f1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.744495 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.748180 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170bdaa1-dc08-4282-955b-debf707fd9f1-kube-api-access-ffrhs" (OuterVolumeSpecName: "kube-api-access-ffrhs") pod "170bdaa1-dc08-4282-955b-debf707fd9f1" (UID: "170bdaa1-dc08-4282-955b-debf707fd9f1"). InnerVolumeSpecName "kube-api-access-ffrhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.795160 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "170bdaa1-dc08-4282-955b-debf707fd9f1" (UID: "170bdaa1-dc08-4282-955b-debf707fd9f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.846334 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170bdaa1-dc08-4282-955b-debf707fd9f1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:04 crc kubenswrapper[4705]: I0216 16:02:04.846658 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffrhs\" (UniqueName: \"kubernetes.io/projected/170bdaa1-dc08-4282-955b-debf707fd9f1-kube-api-access-ffrhs\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:05 crc kubenswrapper[4705]: I0216 16:02:05.564515 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6pbhc" Feb 16 16:02:05 crc kubenswrapper[4705]: I0216 16:02:05.603017 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6pbhc"] Feb 16 16:02:05 crc kubenswrapper[4705]: I0216 16:02:05.613243 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6pbhc"] Feb 16 16:02:06 crc kubenswrapper[4705]: I0216 16:02:06.437938 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" path="/var/lib/kubelet/pods/170bdaa1-dc08-4282-955b-debf707fd9f1/volumes" Feb 16 16:02:06 crc kubenswrapper[4705]: I0216 16:02:06.556533 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5fgwc"] Feb 16 16:02:06 crc kubenswrapper[4705]: I0216 16:02:06.557024 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5fgwc" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="registry-server" containerID="cri-o://787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6" gracePeriod=2 Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.097648 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.212713 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-catalog-content\") pod \"45a762e5-ea54-48f8-855c-71726ce18208\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.212834 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrtgb\" (UniqueName: \"kubernetes.io/projected/45a762e5-ea54-48f8-855c-71726ce18208-kube-api-access-rrtgb\") pod \"45a762e5-ea54-48f8-855c-71726ce18208\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.212970 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-utilities\") pod \"45a762e5-ea54-48f8-855c-71726ce18208\" (UID: \"45a762e5-ea54-48f8-855c-71726ce18208\") " Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.214654 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-utilities" (OuterVolumeSpecName: "utilities") pod "45a762e5-ea54-48f8-855c-71726ce18208" (UID: "45a762e5-ea54-48f8-855c-71726ce18208"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.230124 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a762e5-ea54-48f8-855c-71726ce18208-kube-api-access-rrtgb" (OuterVolumeSpecName: "kube-api-access-rrtgb") pod "45a762e5-ea54-48f8-855c-71726ce18208" (UID: "45a762e5-ea54-48f8-855c-71726ce18208"). InnerVolumeSpecName "kube-api-access-rrtgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.318081 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrtgb\" (UniqueName: \"kubernetes.io/projected/45a762e5-ea54-48f8-855c-71726ce18208-kube-api-access-rrtgb\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.318135 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.372184 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45a762e5-ea54-48f8-855c-71726ce18208" (UID: "45a762e5-ea54-48f8-855c-71726ce18208"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.420300 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a762e5-ea54-48f8-855c-71726ce18208-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.590416 4705 generic.go:334] "Generic (PLEG): container finished" podID="45a762e5-ea54-48f8-855c-71726ce18208" containerID="787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6" exitCode=0 Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.590478 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5fgwc" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.590498 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerDied","Data":"787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6"} Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.591100 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5fgwc" event={"ID":"45a762e5-ea54-48f8-855c-71726ce18208","Type":"ContainerDied","Data":"6764853331df6a6460f33d1474eb9cab471934aabdc993a1e48b65054f9958a8"} Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.591181 4705 scope.go:117] "RemoveContainer" containerID="787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.621384 4705 scope.go:117] "RemoveContainer" containerID="1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.625483 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5fgwc"] Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.638148 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5fgwc"] Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.648420 4705 scope.go:117] "RemoveContainer" containerID="3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.704291 4705 scope.go:117] "RemoveContainer" containerID="787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6" Feb 16 16:02:07 crc kubenswrapper[4705]: E0216 16:02:07.704920 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6\": container with ID starting with 787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6 not found: ID does not exist" containerID="787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.704951 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6"} err="failed to get container status \"787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6\": rpc error: code = NotFound desc = could not find container \"787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6\": container with ID starting with 787c8d8d742c577517def5a8a4a715e10b526b6e5f406a4524b301cb5bdacdd6 not found: ID does not exist" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.704974 4705 scope.go:117] "RemoveContainer" containerID="1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98" Feb 16 16:02:07 crc kubenswrapper[4705]: E0216 16:02:07.705325 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98\": container with ID starting with 1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98 not found: ID does not exist" containerID="1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.705354 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98"} err="failed to get container status \"1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98\": rpc error: code = NotFound desc = could not find container \"1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98\": container with ID 
starting with 1bdd2b07f50af959756f8043ecbb97ffa30b823b8c35bc2dff8277aafb0e7b98 not found: ID does not exist" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.705415 4705 scope.go:117] "RemoveContainer" containerID="3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0" Feb 16 16:02:07 crc kubenswrapper[4705]: E0216 16:02:07.705703 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0\": container with ID starting with 3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0 not found: ID does not exist" containerID="3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0" Feb 16 16:02:07 crc kubenswrapper[4705]: I0216 16:02:07.705736 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0"} err="failed to get container status \"3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0\": rpc error: code = NotFound desc = could not find container \"3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0\": container with ID starting with 3c67b2c287ddfc00c16a1582fe42dcc5aa082ce9220f6102a7076fcdd7d68ab0 not found: ID does not exist" Feb 16 16:02:08 crc kubenswrapper[4705]: I0216 16:02:08.432411 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a762e5-ea54-48f8-855c-71726ce18208" path="/var/lib/kubelet/pods/45a762e5-ea54-48f8-855c-71726ce18208/volumes" Feb 16 16:02:11 crc kubenswrapper[4705]: E0216 16:02:11.423400 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:02:14 crc kubenswrapper[4705]: E0216 16:02:14.421710 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.431020 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.431726 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.558898 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ngq5z"] Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.559666 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="extract-utilities" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.559689 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="extract-utilities" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.559709 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a762e5-ea54-48f8-855c-71726ce18208" 
containerName="extract-utilities" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.559719 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="extract-utilities" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.559734 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="registry-server" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.559745 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="registry-server" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.559787 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="extract-content" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.559796 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a762e5-ea54-48f8-855c-71726ce18208" containerName="extract-content" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.559832 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="extract-content" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.559841 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="extract-content" Feb 16 16:02:26 crc kubenswrapper[4705]: E0216 16:02:26.559884 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="registry-server" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.559895 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="registry-server" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.560200 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a762e5-ea54-48f8-855c-71726ce18208" 
containerName="registry-server" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.560225 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="170bdaa1-dc08-4282-955b-debf707fd9f1" containerName="registry-server" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.562649 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.569286 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngq5z"] Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.751867 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjrcn\" (UniqueName: \"kubernetes.io/projected/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-kube-api-access-sjrcn\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.752458 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-utilities\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.752670 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-catalog-content\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.855349 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-utilities\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.855918 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-catalog-content\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.856065 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjrcn\" (UniqueName: \"kubernetes.io/projected/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-kube-api-access-sjrcn\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.856569 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-catalog-content\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.856827 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-utilities\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.877453 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjrcn\" (UniqueName: 
\"kubernetes.io/projected/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-kube-api-access-sjrcn\") pod \"redhat-marketplace-ngq5z\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:26 crc kubenswrapper[4705]: I0216 16:02:26.906020 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:27 crc kubenswrapper[4705]: I0216 16:02:27.421590 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngq5z"] Feb 16 16:02:27 crc kubenswrapper[4705]: I0216 16:02:27.881574 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerStarted","Data":"f394747612343fb22c3e0f1891ddb10d8664c98075ae50493bdf58ba26dfbcb6"} Feb 16 16:02:28 crc kubenswrapper[4705]: I0216 16:02:28.914193 4705 generic.go:334] "Generic (PLEG): container finished" podID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerID="3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b" exitCode=0 Feb 16 16:02:28 crc kubenswrapper[4705]: I0216 16:02:28.914692 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerDied","Data":"3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b"} Feb 16 16:02:30 crc kubenswrapper[4705]: I0216 16:02:30.944225 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerStarted","Data":"b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd"} Feb 16 16:02:32 crc kubenswrapper[4705]: I0216 16:02:32.976664 4705 generic.go:334] "Generic (PLEG): container finished" podID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" 
containerID="b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd" exitCode=0 Feb 16 16:02:32 crc kubenswrapper[4705]: I0216 16:02:32.976748 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerDied","Data":"b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd"} Feb 16 16:02:33 crc kubenswrapper[4705]: I0216 16:02:33.994063 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerStarted","Data":"d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd"} Feb 16 16:02:34 crc kubenswrapper[4705]: I0216 16:02:34.026200 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ngq5z" podStartSLOduration=3.502986609 podStartE2EDuration="8.02617476s" podCreationTimestamp="2026-02-16 16:02:26 +0000 UTC" firstStartedPulling="2026-02-16 16:02:28.917842159 +0000 UTC m=+4143.102819235" lastFinishedPulling="2026-02-16 16:02:33.44103031 +0000 UTC m=+4147.626007386" observedRunningTime="2026-02-16 16:02:34.020805849 +0000 UTC m=+4148.205782915" watchObservedRunningTime="2026-02-16 16:02:34.02617476 +0000 UTC m=+4148.211151836" Feb 16 16:02:36 crc kubenswrapper[4705]: I0216 16:02:36.907419 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:36 crc kubenswrapper[4705]: I0216 16:02:36.908005 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:36 crc kubenswrapper[4705]: I0216 16:02:36.959474 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:41 crc kubenswrapper[4705]: E0216 16:02:41.423007 4705 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:02:41 crc kubenswrapper[4705]: E0216 16:02:41.423060 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:02:46 crc kubenswrapper[4705]: I0216 16:02:46.963457 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.022802 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngq5z"] Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.118710 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ngq5z" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="registry-server" containerID="cri-o://d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd" gracePeriod=2 Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.683096 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.761833 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-catalog-content\") pod \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.767634 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjrcn\" (UniqueName: \"kubernetes.io/projected/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-kube-api-access-sjrcn\") pod \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.767829 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-utilities\") pod \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\" (UID: \"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3\") " Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.769112 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-utilities" (OuterVolumeSpecName: "utilities") pod "d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" (UID: "d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.773541 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-kube-api-access-sjrcn" (OuterVolumeSpecName: "kube-api-access-sjrcn") pod "d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" (UID: "d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3"). InnerVolumeSpecName "kube-api-access-sjrcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.791812 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" (UID: "d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.871939 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.871976 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjrcn\" (UniqueName: \"kubernetes.io/projected/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-kube-api-access-sjrcn\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:47 crc kubenswrapper[4705]: I0216 16:02:47.871990 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.132426 4705 generic.go:334] "Generic (PLEG): container finished" podID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerID="d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd" exitCode=0 Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.132498 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngq5z" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.132499 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerDied","Data":"d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd"} Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.132982 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngq5z" event={"ID":"d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3","Type":"ContainerDied","Data":"f394747612343fb22c3e0f1891ddb10d8664c98075ae50493bdf58ba26dfbcb6"} Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.133032 4705 scope.go:117] "RemoveContainer" containerID="d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.185603 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngq5z"] Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.187877 4705 scope.go:117] "RemoveContainer" containerID="b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.200072 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngq5z"] Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.216943 4705 scope.go:117] "RemoveContainer" containerID="3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.446512 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" path="/var/lib/kubelet/pods/d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3/volumes" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.944584 4705 scope.go:117] "RemoveContainer" 
containerID="d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd" Feb 16 16:02:48 crc kubenswrapper[4705]: E0216 16:02:48.945853 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd\": container with ID starting with d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd not found: ID does not exist" containerID="d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.946068 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd"} err="failed to get container status \"d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd\": rpc error: code = NotFound desc = could not find container \"d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd\": container with ID starting with d9309c2cb70d960122f1d4c2d3dac859861e23b4682f072d1bf4961c2d1e62dd not found: ID does not exist" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.946213 4705 scope.go:117] "RemoveContainer" containerID="b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd" Feb 16 16:02:48 crc kubenswrapper[4705]: E0216 16:02:48.946866 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd\": container with ID starting with b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd not found: ID does not exist" containerID="b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.946905 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd"} err="failed to get container status \"b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd\": rpc error: code = NotFound desc = could not find container \"b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd\": container with ID starting with b534dcde687339544102c6b2f4c1a612a8cda265cca9853cae0dd1d9d59e76fd not found: ID does not exist" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.946923 4705 scope.go:117] "RemoveContainer" containerID="3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b" Feb 16 16:02:48 crc kubenswrapper[4705]: E0216 16:02:48.947265 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b\": container with ID starting with 3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b not found: ID does not exist" containerID="3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b" Feb 16 16:02:48 crc kubenswrapper[4705]: I0216 16:02:48.947416 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b"} err="failed to get container status \"3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b\": rpc error: code = NotFound desc = could not find container \"3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b\": container with ID starting with 3bef552a3723889ba72b65d73d1a5432915e53640a475b07f8e21c2d7aeca78b not found: ID does not exist" Feb 16 16:02:56 crc kubenswrapper[4705]: E0216 16:02:56.428209 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:02:56 crc kubenswrapper[4705]: E0216 16:02:56.428225 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:03:01 crc kubenswrapper[4705]: I0216 16:03:01.684128 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:03:01 crc kubenswrapper[4705]: I0216 16:03:01.684596 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:03:09 crc kubenswrapper[4705]: E0216 16:03:09.422493 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:03:11 crc kubenswrapper[4705]: E0216 16:03:11.421625 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:03:22 crc kubenswrapper[4705]: E0216 16:03:22.421776 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:03:23 crc kubenswrapper[4705]: E0216 16:03:23.422203 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:03:31 crc kubenswrapper[4705]: I0216 16:03:31.686949 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:03:31 crc kubenswrapper[4705]: I0216 16:03:31.687466 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:03:35 crc kubenswrapper[4705]: E0216 16:03:35.424435 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:03:37 crc kubenswrapper[4705]: E0216 16:03:37.421845 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:03:46 crc kubenswrapper[4705]: E0216 16:03:46.429807 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:03:48 crc kubenswrapper[4705]: E0216 16:03:48.422013 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:03:59 crc kubenswrapper[4705]: E0216 16:03:59.422675 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:04:01 crc kubenswrapper[4705]: E0216 16:04:01.422126 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.684010 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.684076 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.684124 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.685136 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"314a59c1bcbfd4161cca01cf480806fab16a11ba92c42dabbd75887792f28fb9"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.685214 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://314a59c1bcbfd4161cca01cf480806fab16a11ba92c42dabbd75887792f28fb9" gracePeriod=600 Feb 16 16:04:01 crc 
kubenswrapper[4705]: I0216 16:04:01.957749 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="314a59c1bcbfd4161cca01cf480806fab16a11ba92c42dabbd75887792f28fb9" exitCode=0 Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.957796 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"314a59c1bcbfd4161cca01cf480806fab16a11ba92c42dabbd75887792f28fb9"} Feb 16 16:04:01 crc kubenswrapper[4705]: I0216 16:04:01.958143 4705 scope.go:117] "RemoveContainer" containerID="9c54eb84137a52182d7e485ae3994e62474b62a488137d197501a8e4fee07dc1" Feb 16 16:04:02 crc kubenswrapper[4705]: I0216 16:04:02.971146 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044"} Feb 16 16:04:11 crc kubenswrapper[4705]: E0216 16:04:11.422146 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:04:13 crc kubenswrapper[4705]: E0216 16:04:13.422475 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:04:24 crc kubenswrapper[4705]: E0216 16:04:24.421640 4705 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:04:26 crc kubenswrapper[4705]: E0216 16:04:26.438721 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:04:35 crc kubenswrapper[4705]: I0216 16:04:35.815857 4705 scope.go:117] "RemoveContainer" containerID="433ea4059d33ffe36aae8decc88f406f808260d8a5f1bd117e4b591424321504" Feb 16 16:04:35 crc kubenswrapper[4705]: I0216 16:04:35.850723 4705 scope.go:117] "RemoveContainer" containerID="fcdb2d6e6be0d768bddbedb97937147e4b45a055a895a05093067235aae58d56" Feb 16 16:04:35 crc kubenswrapper[4705]: I0216 16:04:35.875298 4705 scope.go:117] "RemoveContainer" containerID="42ce4e0addaeffaf331f978bfecd58e49daffbcd26474b8a5e6259c4e372d5da" Feb 16 16:04:39 crc kubenswrapper[4705]: E0216 16:04:39.421930 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:04:39 crc kubenswrapper[4705]: E0216 16:04:39.422094 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:04:51 crc kubenswrapper[4705]: E0216 16:04:51.422039 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:04:53 crc kubenswrapper[4705]: E0216 16:04:53.421772 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:05:02 crc kubenswrapper[4705]: E0216 16:05:02.421226 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:05:06 crc kubenswrapper[4705]: E0216 16:05:06.435807 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:05:15 crc kubenswrapper[4705]: E0216 16:05:15.428285 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:05:19 crc kubenswrapper[4705]: E0216 16:05:19.423642 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:05:26 crc kubenswrapper[4705]: E0216 16:05:26.431582 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:05:33 crc kubenswrapper[4705]: E0216 16:05:33.422852 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:05:39 crc kubenswrapper[4705]: E0216 16:05:39.421572 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:05:46 crc kubenswrapper[4705]: E0216 16:05:46.431208 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:05:52 crc kubenswrapper[4705]: E0216 16:05:52.421888 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:05:59 crc kubenswrapper[4705]: E0216 16:05:59.423029 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:06:07 crc kubenswrapper[4705]: I0216 16:06:07.423335 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 16:06:07 crc kubenswrapper[4705]: E0216 16:06:07.555628 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:06:07 crc kubenswrapper[4705]: E0216 16:06:07.555699 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:06:07 crc kubenswrapper[4705]: E0216 16:06:07.555898 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 16:06:07 crc kubenswrapper[4705]: E0216 16:06:07.557139 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:06:12 crc kubenswrapper[4705]: E0216 16:06:12.515773 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:06:12 crc kubenswrapper[4705]: E0216 16:06:12.516279 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:06:12 crc kubenswrapper[4705]: E0216 16:06:12.516457 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:06:12 crc kubenswrapper[4705]: E0216 16:06:12.517685 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:06:21 crc kubenswrapper[4705]: E0216 16:06:21.423228 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:06:24 crc kubenswrapper[4705]: E0216 16:06:24.422681 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:06:31 crc kubenswrapper[4705]: I0216 16:06:31.684423 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:06:31 crc kubenswrapper[4705]: I0216 16:06:31.685070 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:06:35 crc kubenswrapper[4705]: E0216 16:06:35.425158 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:06:36 crc kubenswrapper[4705]: E0216 16:06:36.429963 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:06:47 crc kubenswrapper[4705]: E0216 16:06:47.425022 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:06:47 crc kubenswrapper[4705]: E0216 16:06:47.425049 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:06:58 crc kubenswrapper[4705]: E0216 16:06:58.424961 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:07:00 crc kubenswrapper[4705]: E0216 16:07:00.914239 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:07:01 crc kubenswrapper[4705]: I0216 16:07:01.684976 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:07:01 crc kubenswrapper[4705]: I0216 16:07:01.685397 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:07:09 crc kubenswrapper[4705]: E0216 16:07:09.420243 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:07:11 crc kubenswrapper[4705]: I0216 16:07:11.165497 4705 generic.go:334] "Generic (PLEG): container finished" podID="896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" containerID="7945d6ad7374ab3b23b668ea795bd7af5c36b315c187c0f9f1d7dca19352746b" exitCode=2 Feb 16 16:07:11 crc kubenswrapper[4705]: I0216 16:07:11.165546 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" event={"ID":"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c","Type":"ContainerDied","Data":"7945d6ad7374ab3b23b668ea795bd7af5c36b315c187c0f9f1d7dca19352746b"} Feb 16 16:07:12 crc kubenswrapper[4705]: I0216 16:07:12.640232 4705 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" Feb 16 16:07:12 crc kubenswrapper[4705]: I0216 16:07:12.775906 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-ssh-key-openstack-edpm-ipam\") pod \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " Feb 16 16:07:12 crc kubenswrapper[4705]: I0216 16:07:12.777141 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-687w8\" (UniqueName: \"kubernetes.io/projected/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-kube-api-access-687w8\") pod \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " Feb 16 16:07:12 crc kubenswrapper[4705]: I0216 16:07:12.779656 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-inventory\") pod \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\" (UID: \"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c\") " Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.186679 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" event={"ID":"896e8ac5-e84c-41d6-a6e5-638c9b5cae1c","Type":"ContainerDied","Data":"f200efbd485249ddfdf83b40b40f349bd03520224bed729f92b3d095ed0ae82e"} Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.186738 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f200efbd485249ddfdf83b40b40f349bd03520224bed729f92b3d095ed0ae82e" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.186814 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.469662 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-kube-api-access-687w8" (OuterVolumeSpecName: "kube-api-access-687w8") pod "896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" (UID: "896e8ac5-e84c-41d6-a6e5-638c9b5cae1c"). InnerVolumeSpecName "kube-api-access-687w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.503358 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-687w8\" (UniqueName: \"kubernetes.io/projected/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-kube-api-access-687w8\") on node \"crc\" DevicePath \"\"" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.632578 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" (UID: "896e8ac5-e84c-41d6-a6e5-638c9b5cae1c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.632972 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-inventory" (OuterVolumeSpecName: "inventory") pod "896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" (UID: "896e8ac5-e84c-41d6-a6e5-638c9b5cae1c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.709299 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 16:07:13 crc kubenswrapper[4705]: I0216 16:07:13.709341 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/896e8ac5-e84c-41d6-a6e5-638c9b5cae1c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 16:07:13 crc kubenswrapper[4705]: E0216 16:07:13.950322 4705 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod896e8ac5_e84c_41d6_a6e5_638c9b5cae1c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod896e8ac5_e84c_41d6_a6e5_638c9b5cae1c.slice/crio-f200efbd485249ddfdf83b40b40f349bd03520224bed729f92b3d095ed0ae82e\": RecentStats: unable to find data in memory cache]" Feb 16 16:07:14 crc kubenswrapper[4705]: E0216 16:07:14.423353 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:07:24 crc kubenswrapper[4705]: E0216 16:07:24.422991 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:07:28 crc 
kubenswrapper[4705]: E0216 16:07:28.425365 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:07:31 crc kubenswrapper[4705]: I0216 16:07:31.684655 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:07:31 crc kubenswrapper[4705]: I0216 16:07:31.685496 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:07:31 crc kubenswrapper[4705]: I0216 16:07:31.685589 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 16:07:31 crc kubenswrapper[4705]: I0216 16:07:31.686972 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 16:07:31 crc kubenswrapper[4705]: I0216 16:07:31.687048 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" gracePeriod=600 Feb 16 16:07:31 crc kubenswrapper[4705]: E0216 16:07:31.838534 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:07:32 crc kubenswrapper[4705]: I0216 16:07:32.419861 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" exitCode=0 Feb 16 16:07:32 crc kubenswrapper[4705]: I0216 16:07:32.441646 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044"} Feb 16 16:07:32 crc kubenswrapper[4705]: I0216 16:07:32.442519 4705 scope.go:117] "RemoveContainer" containerID="314a59c1bcbfd4161cca01cf480806fab16a11ba92c42dabbd75887792f28fb9" Feb 16 16:07:32 crc kubenswrapper[4705]: I0216 16:07:32.443241 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:07:32 crc kubenswrapper[4705]: E0216 16:07:32.443707 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:07:37 crc kubenswrapper[4705]: E0216 16:07:37.423498 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:07:42 crc kubenswrapper[4705]: E0216 16:07:42.423905 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:07:45 crc kubenswrapper[4705]: I0216 16:07:45.420654 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:07:45 crc kubenswrapper[4705]: E0216 16:07:45.421855 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:07:52 crc kubenswrapper[4705]: E0216 16:07:52.422282 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:07:54 crc kubenswrapper[4705]: E0216 16:07:54.422875 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:07:58 crc kubenswrapper[4705]: I0216 16:07:58.420224 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:07:58 crc kubenswrapper[4705]: E0216 16:07:58.420987 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.501828 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9zjsp"] Feb 16 16:08:00 crc kubenswrapper[4705]: E0216 16:08:00.502785 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.502806 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 16:08:00 crc kubenswrapper[4705]: E0216 16:08:00.502819 4705 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="extract-content" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.502826 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="extract-content" Feb 16 16:08:00 crc kubenswrapper[4705]: E0216 16:08:00.502889 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="extract-utilities" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.502901 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="extract-utilities" Feb 16 16:08:00 crc kubenswrapper[4705]: E0216 16:08:00.502924 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="registry-server" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.502932 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="registry-server" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.503320 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="896e8ac5-e84c-41d6-a6e5-638c9b5cae1c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.503359 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f4cbbf-620e-45be-bb8e-e0b9adf11ad3" containerName="registry-server" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.505555 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.519388 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9zjsp"] Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.609300 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4d7p\" (UniqueName: \"kubernetes.io/projected/ffc91527-f266-408e-9dad-4ded626632f6-kube-api-access-t4d7p\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.610006 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc91527-f266-408e-9dad-4ded626632f6-catalog-content\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.610141 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc91527-f266-408e-9dad-4ded626632f6-utilities\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.713899 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc91527-f266-408e-9dad-4ded626632f6-catalog-content\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.714028 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc91527-f266-408e-9dad-4ded626632f6-utilities\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.714527 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc91527-f266-408e-9dad-4ded626632f6-catalog-content\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.714617 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc91527-f266-408e-9dad-4ded626632f6-utilities\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.714986 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4d7p\" (UniqueName: \"kubernetes.io/projected/ffc91527-f266-408e-9dad-4ded626632f6-kube-api-access-t4d7p\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.741120 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4d7p\" (UniqueName: \"kubernetes.io/projected/ffc91527-f266-408e-9dad-4ded626632f6-kube-api-access-t4d7p\") pod \"community-operators-9zjsp\" (UID: \"ffc91527-f266-408e-9dad-4ded626632f6\") " pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:00 crc kubenswrapper[4705]: I0216 16:08:00.885101 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:01 crc kubenswrapper[4705]: I0216 16:08:01.433025 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9zjsp"] Feb 16 16:08:01 crc kubenswrapper[4705]: I0216 16:08:01.784204 4705 generic.go:334] "Generic (PLEG): container finished" podID="ffc91527-f266-408e-9dad-4ded626632f6" containerID="ae7cf3cd2f47a26ad351f8c456f7e740fd52e36d4a7570bfefa2c8028acc7e73" exitCode=0 Feb 16 16:08:01 crc kubenswrapper[4705]: I0216 16:08:01.784286 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9zjsp" event={"ID":"ffc91527-f266-408e-9dad-4ded626632f6","Type":"ContainerDied","Data":"ae7cf3cd2f47a26ad351f8c456f7e740fd52e36d4a7570bfefa2c8028acc7e73"} Feb 16 16:08:01 crc kubenswrapper[4705]: I0216 16:08:01.784403 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9zjsp" event={"ID":"ffc91527-f266-408e-9dad-4ded626632f6","Type":"ContainerStarted","Data":"0a4259fbc128ee2d7bf7c2e29feea589ef20f27af6c8c3dae6c3f0c0796fcf6b"} Feb 16 16:08:06 crc kubenswrapper[4705]: I0216 16:08:06.855150 4705 generic.go:334] "Generic (PLEG): container finished" podID="ffc91527-f266-408e-9dad-4ded626632f6" containerID="634d5466f4c08d5c0f3e8701b771a7f27de757b9de7c5e15a184498af2f83b05" exitCode=0 Feb 16 16:08:06 crc kubenswrapper[4705]: I0216 16:08:06.855262 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9zjsp" event={"ID":"ffc91527-f266-408e-9dad-4ded626632f6","Type":"ContainerDied","Data":"634d5466f4c08d5c0f3e8701b771a7f27de757b9de7c5e15a184498af2f83b05"} Feb 16 16:08:07 crc kubenswrapper[4705]: E0216 16:08:07.428255 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:08:07 crc kubenswrapper[4705]: E0216 16:08:07.428408 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:08:07 crc kubenswrapper[4705]: I0216 16:08:07.871468 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9zjsp" event={"ID":"ffc91527-f266-408e-9dad-4ded626632f6","Type":"ContainerStarted","Data":"a91119866673d2c98754bedfce7058d15c91ded7ca173c332b245ae41c080a8b"} Feb 16 16:08:07 crc kubenswrapper[4705]: I0216 16:08:07.902748 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9zjsp" podStartSLOduration=2.453860107 podStartE2EDuration="7.902715639s" podCreationTimestamp="2026-02-16 16:08:00 +0000 UTC" firstStartedPulling="2026-02-16 16:08:01.786585828 +0000 UTC m=+4475.971562904" lastFinishedPulling="2026-02-16 16:08:07.23544136 +0000 UTC m=+4481.420418436" observedRunningTime="2026-02-16 16:08:07.902027539 +0000 UTC m=+4482.087004635" watchObservedRunningTime="2026-02-16 16:08:07.902715639 +0000 UTC m=+4482.087692725" Feb 16 16:08:09 crc kubenswrapper[4705]: I0216 16:08:09.419475 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:08:09 crc kubenswrapper[4705]: E0216 16:08:09.420087 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:08:10 crc kubenswrapper[4705]: I0216 16:08:10.886219 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:10 crc kubenswrapper[4705]: I0216 16:08:10.888528 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:10 crc kubenswrapper[4705]: I0216 16:08:10.956670 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:12 crc kubenswrapper[4705]: I0216 16:08:12.882580 4705 trace.go:236] Trace[1790360609]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (16-Feb-2026 16:08:11.698) (total time: 1179ms): Feb 16 16:08:12 crc kubenswrapper[4705]: Trace[1790360609]: [1.179522529s] [1.179522529s] END Feb 16 16:08:20 crc kubenswrapper[4705]: I0216 16:08:20.421506 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:08:20 crc kubenswrapper[4705]: E0216 16:08:20.423147 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:08:20 crc kubenswrapper[4705]: E0216 16:08:20.423320 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:08:21 crc kubenswrapper[4705]: I0216 16:08:21.006846 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9zjsp" Feb 16 16:08:21 crc kubenswrapper[4705]: I0216 16:08:21.194907 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9zjsp"] Feb 16 16:08:21 crc kubenswrapper[4705]: I0216 16:08:21.260566 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j2v29"] Feb 16 16:08:21 crc kubenswrapper[4705]: I0216 16:08:21.260922 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j2v29" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="registry-server" containerID="cri-o://0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8" gracePeriod=2 Feb 16 16:08:21 crc kubenswrapper[4705]: I0216 16:08:21.860484 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j2v29" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:21.999426 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-utilities\") pod \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:21.999481 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-catalog-content\") pod \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:21.999770 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8jvm\" (UniqueName: \"kubernetes.io/projected/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-kube-api-access-t8jvm\") pod \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\" (UID: \"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634\") " Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.002581 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-utilities" (OuterVolumeSpecName: "utilities") pod "f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" (UID: "f9ff9374-4f3a-4f1c-a741-ca2a34ff2634"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.021461 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-kube-api-access-t8jvm" (OuterVolumeSpecName: "kube-api-access-t8jvm") pod "f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" (UID: "f9ff9374-4f3a-4f1c-a741-ca2a34ff2634"). InnerVolumeSpecName "kube-api-access-t8jvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.068298 4705 generic.go:334] "Generic (PLEG): container finished" podID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerID="0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8" exitCode=0 Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.068362 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2v29" event={"ID":"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634","Type":"ContainerDied","Data":"0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8"} Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.068413 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2v29" event={"ID":"f9ff9374-4f3a-4f1c-a741-ca2a34ff2634","Type":"ContainerDied","Data":"abefdacd3131f9637e18b5d6a682929bf8b75c5123f9e2a087bae18c0b3b4aa0"} Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.068435 4705 scope.go:117] "RemoveContainer" containerID="0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.068654 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2v29" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.084040 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" (UID: "f9ff9374-4f3a-4f1c-a741-ca2a34ff2634"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.098802 4705 scope.go:117] "RemoveContainer" containerID="07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.104127 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.104305 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.104364 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8jvm\" (UniqueName: \"kubernetes.io/projected/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634-kube-api-access-t8jvm\") on node \"crc\" DevicePath \"\"" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.139054 4705 scope.go:117] "RemoveContainer" containerID="08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.200955 4705 scope.go:117] "RemoveContainer" containerID="0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8" Feb 16 16:08:22 crc kubenswrapper[4705]: E0216 16:08:22.201564 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8\": container with ID starting with 0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8 not found: ID does not exist" containerID="0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.201637 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8"} err="failed to get container status \"0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8\": rpc error: code = NotFound desc = could not find container \"0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8\": container with ID starting with 0b30a47692846b7e474248e70d5b8b1e9fc70bb329de06d679aa7d6b6fbaadc8 not found: ID does not exist" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.201667 4705 scope.go:117] "RemoveContainer" containerID="07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703" Feb 16 16:08:22 crc kubenswrapper[4705]: E0216 16:08:22.202065 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703\": container with ID starting with 07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703 not found: ID does not exist" containerID="07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.202112 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703"} err="failed to get container status \"07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703\": rpc error: code = NotFound desc = could not find container \"07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703\": container with ID starting with 07ec53b2839776d2260d3cfc8f45918858bf01e891855c20602962959efa0703 not found: ID does not exist" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.202138 4705 scope.go:117] "RemoveContainer" containerID="08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88" Feb 16 16:08:22 crc kubenswrapper[4705]: E0216 16:08:22.202438 4705 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88\": container with ID starting with 08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88 not found: ID does not exist" containerID="08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.202492 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88"} err="failed to get container status \"08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88\": rpc error: code = NotFound desc = could not find container \"08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88\": container with ID starting with 08b5f7859339c79f45b3b578747c7de73bf279aec2d58c5054a30bef46a9ca88 not found: ID does not exist" Feb 16 16:08:22 crc kubenswrapper[4705]: E0216 16:08:22.422010 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.444599 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j2v29"] Feb 16 16:08:22 crc kubenswrapper[4705]: I0216 16:08:22.452902 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j2v29"] Feb 16 16:08:24 crc kubenswrapper[4705]: I0216 16:08:24.435329 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" path="/var/lib/kubelet/pods/f9ff9374-4f3a-4f1c-a741-ca2a34ff2634/volumes" Feb 16 16:08:33 crc kubenswrapper[4705]: E0216 16:08:33.423251 
4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:08:35 crc kubenswrapper[4705]: I0216 16:08:35.419737 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:08:35 crc kubenswrapper[4705]: E0216 16:08:35.420442 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:08:35 crc kubenswrapper[4705]: E0216 16:08:35.421886 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:08:36 crc kubenswrapper[4705]: I0216 16:08:36.049450 4705 scope.go:117] "RemoveContainer" containerID="1215060225f5cdf9e6306af8c84f46842dbe2f8e8253cc47d3a4f61e96ef1081" Feb 16 16:08:36 crc kubenswrapper[4705]: I0216 16:08:36.114929 4705 scope.go:117] "RemoveContainer" containerID="4d9d713141a3f7aae0f29ba8e808800a207d2293c8b10b72e5b38efe8b4e1b72" Feb 16 16:08:36 crc kubenswrapper[4705]: I0216 16:08:36.141836 4705 scope.go:117] "RemoveContainer" containerID="0c5e900cecec2198ca2b7f8dc95e8434953c226ab2da5841e59c797336ef7673" Feb 16 16:08:45 crc kubenswrapper[4705]: E0216 
16:08:45.421156 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:08:46 crc kubenswrapper[4705]: I0216 16:08:46.428777 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:08:46 crc kubenswrapper[4705]: E0216 16:08:46.429145 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:08:47 crc kubenswrapper[4705]: E0216 16:08:47.423843 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:08:58 crc kubenswrapper[4705]: E0216 16:08:58.423789 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:08:59 crc kubenswrapper[4705]: I0216 16:08:59.419410 4705 scope.go:117] "RemoveContainer" 
containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:08:59 crc kubenswrapper[4705]: E0216 16:08:59.419940 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:09:00 crc kubenswrapper[4705]: E0216 16:09:00.422776 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:09:10 crc kubenswrapper[4705]: I0216 16:09:10.420576 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:09:10 crc kubenswrapper[4705]: E0216 16:09:10.421791 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:09:13 crc kubenswrapper[4705]: E0216 16:09:13.421523 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:09:14 crc kubenswrapper[4705]: E0216 16:09:14.420790 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:09:21 crc kubenswrapper[4705]: I0216 16:09:21.420189 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:09:21 crc kubenswrapper[4705]: E0216 16:09:21.421032 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:09:25 crc kubenswrapper[4705]: E0216 16:09:25.422605 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:09:29 crc kubenswrapper[4705]: E0216 16:09:29.422098 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:09:36 crc kubenswrapper[4705]: I0216 
16:09:36.434167 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:09:36 crc kubenswrapper[4705]: E0216 16:09:36.435555 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:09:40 crc kubenswrapper[4705]: E0216 16:09:40.423747 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:09:42 crc kubenswrapper[4705]: E0216 16:09:42.421683 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:09:48 crc kubenswrapper[4705]: I0216 16:09:48.420278 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:09:48 crc kubenswrapper[4705]: E0216 16:09:48.421329 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:09:51 crc kubenswrapper[4705]: E0216 16:09:51.423191 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:09:55 crc kubenswrapper[4705]: E0216 16:09:55.422105 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:09:59 crc kubenswrapper[4705]: I0216 16:09:59.420726 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:09:59 crc kubenswrapper[4705]: E0216 16:09:59.421248 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:10:06 crc kubenswrapper[4705]: E0216 16:10:06.429069 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" 
Feb 16 16:10:08 crc kubenswrapper[4705]: E0216 16:10:08.421995 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:10:12 crc kubenswrapper[4705]: I0216 16:10:12.420482 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:10:12 crc kubenswrapper[4705]: E0216 16:10:12.422809 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:10:17 crc kubenswrapper[4705]: E0216 16:10:17.423678 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:10:21 crc kubenswrapper[4705]: E0216 16:10:21.426594 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:10:26 crc kubenswrapper[4705]: I0216 16:10:26.441790 4705 scope.go:117] "RemoveContainer" 
containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:10:26 crc kubenswrapper[4705]: E0216 16:10:26.446957 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:10:29 crc kubenswrapper[4705]: E0216 16:10:29.423254 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:10:36 crc kubenswrapper[4705]: E0216 16:10:36.438402 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:10:40 crc kubenswrapper[4705]: I0216 16:10:40.419735 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:10:40 crc kubenswrapper[4705]: E0216 16:10:40.420397 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:10:40 crc kubenswrapper[4705]: E0216 16:10:40.421699 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:10:50 crc kubenswrapper[4705]: E0216 16:10:50.422939 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:10:55 crc kubenswrapper[4705]: I0216 16:10:55.419520 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:10:55 crc kubenswrapper[4705]: E0216 16:10:55.420832 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:10:55 crc kubenswrapper[4705]: E0216 16:10:55.421830 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" 
Feb 16 16:11:03 crc kubenswrapper[4705]: E0216 16:11:03.430921 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:11:07 crc kubenswrapper[4705]: E0216 16:11:07.422343 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:11:10 crc kubenswrapper[4705]: I0216 16:11:10.419981 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:11:10 crc kubenswrapper[4705]: E0216 16:11:10.421139 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:11:14 crc kubenswrapper[4705]: I0216 16:11:14.423041 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 16:11:14 crc kubenswrapper[4705]: E0216 16:11:14.548762 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:11:14 crc kubenswrapper[4705]: E0216 16:11:14.548828 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:11:14 crc kubenswrapper[4705]: E0216 16:11:14.548983 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:11:14 crc kubenswrapper[4705]: E0216 16:11:14.550184 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:11:19 crc kubenswrapper[4705]: E0216 16:11:19.560861 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:11:19 crc kubenswrapper[4705]: E0216 16:11:19.561729 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:11:19 crc kubenswrapper[4705]: E0216 16:11:19.561878 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:11:19 crc kubenswrapper[4705]: E0216 16:11:19.563218 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:11:23 crc kubenswrapper[4705]: I0216 16:11:23.420249 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:11:23 crc kubenswrapper[4705]: E0216 16:11:23.421434 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:11:28 crc kubenswrapper[4705]: E0216 16:11:28.431099 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:11:31 crc kubenswrapper[4705]: E0216 16:11:31.424219 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:11:38 crc kubenswrapper[4705]: I0216 16:11:38.421043 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:11:38 crc kubenswrapper[4705]: E0216 16:11:38.422022 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.409535 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sch5q"] Feb 16 16:11:39 crc kubenswrapper[4705]: E0216 16:11:39.410793 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="registry-server" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.410843 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="registry-server" Feb 16 16:11:39 crc kubenswrapper[4705]: E0216 16:11:39.410908 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="extract-utilities" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.410926 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="extract-utilities" Feb 16 16:11:39 crc kubenswrapper[4705]: E0216 16:11:39.410954 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="extract-content" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.410967 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="extract-content" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.411350 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ff9374-4f3a-4f1c-a741-ca2a34ff2634" containerName="registry-server" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.413424 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.428540 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sch5q"] Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.564191 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-utilities\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.564674 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-catalog-content\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.564701 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl42c\" (UniqueName: \"kubernetes.io/projected/2e1b6744-00c5-44f1-a5e6-0056eef02141-kube-api-access-jl42c\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.669498 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-utilities\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.669603 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-catalog-content\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.669640 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl42c\" (UniqueName: \"kubernetes.io/projected/2e1b6744-00c5-44f1-a5e6-0056eef02141-kube-api-access-jl42c\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.670111 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-catalog-content\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:39 crc kubenswrapper[4705]: I0216 16:11:39.670181 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-utilities\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:40 crc kubenswrapper[4705]: I0216 16:11:40.358670 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl42c\" (UniqueName: \"kubernetes.io/projected/2e1b6744-00c5-44f1-a5e6-0056eef02141-kube-api-access-jl42c\") pod \"redhat-operators-sch5q\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:40 crc kubenswrapper[4705]: I0216 16:11:40.646911 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:41 crc kubenswrapper[4705]: I0216 16:11:41.250640 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sch5q"] Feb 16 16:11:41 crc kubenswrapper[4705]: W0216 16:11:41.255406 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e1b6744_00c5_44f1_a5e6_0056eef02141.slice/crio-c1455f25e727ae67a4e3ddffdb45d264644768af439633b61d210e8cef395318 WatchSource:0}: Error finding container c1455f25e727ae67a4e3ddffdb45d264644768af439633b61d210e8cef395318: Status 404 returned error can't find the container with id c1455f25e727ae67a4e3ddffdb45d264644768af439633b61d210e8cef395318 Feb 16 16:11:41 crc kubenswrapper[4705]: I0216 16:11:41.393258 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerStarted","Data":"c1455f25e727ae67a4e3ddffdb45d264644768af439633b61d210e8cef395318"} Feb 16 16:11:42 crc kubenswrapper[4705]: I0216 16:11:42.409039 4705 generic.go:334] "Generic (PLEG): container finished" podID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerID="3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086" exitCode=0 Feb 16 16:11:42 crc kubenswrapper[4705]: I0216 16:11:42.409484 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerDied","Data":"3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086"} Feb 16 16:11:43 crc kubenswrapper[4705]: E0216 16:11:43.422694 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:11:43 crc kubenswrapper[4705]: I0216 16:11:43.428316 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerStarted","Data":"8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085"} Feb 16 16:11:45 crc kubenswrapper[4705]: E0216 16:11:45.423699 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:11:48 crc kubenswrapper[4705]: I0216 16:11:48.492498 4705 generic.go:334] "Generic (PLEG): container finished" podID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerID="8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085" exitCode=0 Feb 16 16:11:48 crc kubenswrapper[4705]: I0216 16:11:48.492644 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerDied","Data":"8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085"} Feb 16 16:11:49 crc kubenswrapper[4705]: I0216 16:11:49.506429 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerStarted","Data":"add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb"} Feb 16 16:11:49 crc kubenswrapper[4705]: I0216 16:11:49.536581 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sch5q" podStartSLOduration=4.038745708 podStartE2EDuration="10.536563312s" podCreationTimestamp="2026-02-16 
16:11:39 +0000 UTC" firstStartedPulling="2026-02-16 16:11:42.413200479 +0000 UTC m=+4696.598177565" lastFinishedPulling="2026-02-16 16:11:48.911018093 +0000 UTC m=+4703.095995169" observedRunningTime="2026-02-16 16:11:49.533551697 +0000 UTC m=+4703.718528783" watchObservedRunningTime="2026-02-16 16:11:49.536563312 +0000 UTC m=+4703.721540378" Feb 16 16:11:50 crc kubenswrapper[4705]: I0216 16:11:50.420544 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:11:50 crc kubenswrapper[4705]: E0216 16:11:50.420849 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:11:50 crc kubenswrapper[4705]: I0216 16:11:50.647358 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:50 crc kubenswrapper[4705]: I0216 16:11:50.647482 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:11:51 crc kubenswrapper[4705]: I0216 16:11:51.768810 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sch5q" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" probeResult="failure" output=< Feb 16 16:11:51 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:11:51 crc kubenswrapper[4705]: > Feb 16 16:11:56 crc kubenswrapper[4705]: E0216 16:11:56.431028 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:11:59 crc kubenswrapper[4705]: E0216 16:11:59.423239 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:12:02 crc kubenswrapper[4705]: I0216 16:12:02.419464 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:12:02 crc kubenswrapper[4705]: E0216 16:12:02.420081 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:12:02 crc kubenswrapper[4705]: I0216 16:12:02.555137 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sch5q" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" probeResult="failure" output=< Feb 16 16:12:02 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:12:02 crc kubenswrapper[4705]: > Feb 16 16:12:11 crc kubenswrapper[4705]: E0216 16:12:11.421485 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:12:11 crc kubenswrapper[4705]: I0216 16:12:11.698237 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sch5q" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" probeResult="failure" output=< Feb 16 16:12:11 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:12:11 crc kubenswrapper[4705]: > Feb 16 16:12:14 crc kubenswrapper[4705]: E0216 16:12:14.422449 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.671069 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6hdb5"] Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.674290 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.689760 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hdb5"] Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.719505 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-utilities\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.719566 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m269\" (UniqueName: \"kubernetes.io/projected/6dba665a-c068-4b3b-aab8-2f915e391d01-kube-api-access-7m269\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.719672 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-catalog-content\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.822093 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-utilities\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.822156 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7m269\" (UniqueName: \"kubernetes.io/projected/6dba665a-c068-4b3b-aab8-2f915e391d01-kube-api-access-7m269\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.822233 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-catalog-content\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.822866 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-utilities\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.822877 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-catalog-content\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:16 crc kubenswrapper[4705]: I0216 16:12:16.843632 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m269\" (UniqueName: \"kubernetes.io/projected/6dba665a-c068-4b3b-aab8-2f915e391d01-kube-api-access-7m269\") pod \"certified-operators-6hdb5\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:17 crc kubenswrapper[4705]: I0216 16:12:17.005954 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:17 crc kubenswrapper[4705]: I0216 16:12:17.419696 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:12:17 crc kubenswrapper[4705]: E0216 16:12:17.420463 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:12:17 crc kubenswrapper[4705]: I0216 16:12:17.595655 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hdb5"] Feb 16 16:12:17 crc kubenswrapper[4705]: W0216 16:12:17.599715 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dba665a_c068_4b3b_aab8_2f915e391d01.slice/crio-e746161fc044c75e8acd8117a0333abaa4b06b4ef3fc647c45b24c2d95739d45 WatchSource:0}: Error finding container e746161fc044c75e8acd8117a0333abaa4b06b4ef3fc647c45b24c2d95739d45: Status 404 returned error can't find the container with id e746161fc044c75e8acd8117a0333abaa4b06b4ef3fc647c45b24c2d95739d45 Feb 16 16:12:17 crc kubenswrapper[4705]: I0216 16:12:17.830107 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerStarted","Data":"9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188"} Feb 16 16:12:17 crc kubenswrapper[4705]: I0216 16:12:17.830164 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" 
event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerStarted","Data":"e746161fc044c75e8acd8117a0333abaa4b06b4ef3fc647c45b24c2d95739d45"} Feb 16 16:12:18 crc kubenswrapper[4705]: I0216 16:12:18.847928 4705 generic.go:334] "Generic (PLEG): container finished" podID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerID="9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188" exitCode=0 Feb 16 16:12:18 crc kubenswrapper[4705]: I0216 16:12:18.848074 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerDied","Data":"9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188"} Feb 16 16:12:20 crc kubenswrapper[4705]: I0216 16:12:20.721713 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:12:20 crc kubenswrapper[4705]: I0216 16:12:20.778003 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:12:20 crc kubenswrapper[4705]: I0216 16:12:20.876656 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerStarted","Data":"e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7"} Feb 16 16:12:21 crc kubenswrapper[4705]: I0216 16:12:21.889408 4705 generic.go:334] "Generic (PLEG): container finished" podID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerID="e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7" exitCode=0 Feb 16 16:12:21 crc kubenswrapper[4705]: I0216 16:12:21.889456 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" 
event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerDied","Data":"e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7"} Feb 16 16:12:22 crc kubenswrapper[4705]: I0216 16:12:22.901595 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerStarted","Data":"f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32"} Feb 16 16:12:22 crc kubenswrapper[4705]: I0216 16:12:22.925673 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6hdb5" podStartSLOduration=3.431526008 podStartE2EDuration="6.925653172s" podCreationTimestamp="2026-02-16 16:12:16 +0000 UTC" firstStartedPulling="2026-02-16 16:12:18.853577212 +0000 UTC m=+4733.038554328" lastFinishedPulling="2026-02-16 16:12:22.347704416 +0000 UTC m=+4736.532681492" observedRunningTime="2026-02-16 16:12:22.92557765 +0000 UTC m=+4737.110554726" watchObservedRunningTime="2026-02-16 16:12:22.925653172 +0000 UTC m=+4737.110630248" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.030835 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sch5q"] Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.031151 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sch5q" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" containerID="cri-o://add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb" gracePeriod=2 Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.540819 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.608716 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-utilities\") pod \"2e1b6744-00c5-44f1-a5e6-0056eef02141\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.608777 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl42c\" (UniqueName: \"kubernetes.io/projected/2e1b6744-00c5-44f1-a5e6-0056eef02141-kube-api-access-jl42c\") pod \"2e1b6744-00c5-44f1-a5e6-0056eef02141\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.609008 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-catalog-content\") pod \"2e1b6744-00c5-44f1-a5e6-0056eef02141\" (UID: \"2e1b6744-00c5-44f1-a5e6-0056eef02141\") " Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.609783 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-utilities" (OuterVolumeSpecName: "utilities") pod "2e1b6744-00c5-44f1-a5e6-0056eef02141" (UID: "2e1b6744-00c5-44f1-a5e6-0056eef02141"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.614212 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e1b6744-00c5-44f1-a5e6-0056eef02141-kube-api-access-jl42c" (OuterVolumeSpecName: "kube-api-access-jl42c") pod "2e1b6744-00c5-44f1-a5e6-0056eef02141" (UID: "2e1b6744-00c5-44f1-a5e6-0056eef02141"). InnerVolumeSpecName "kube-api-access-jl42c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.711998 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.712041 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl42c\" (UniqueName: \"kubernetes.io/projected/2e1b6744-00c5-44f1-a5e6-0056eef02141-kube-api-access-jl42c\") on node \"crc\" DevicePath \"\"" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.728643 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e1b6744-00c5-44f1-a5e6-0056eef02141" (UID: "2e1b6744-00c5-44f1-a5e6-0056eef02141"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.814914 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e1b6744-00c5-44f1-a5e6-0056eef02141-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.914067 4705 generic.go:334] "Generic (PLEG): container finished" podID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerID="add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb" exitCode=0 Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.914135 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sch5q" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.914155 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerDied","Data":"add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb"} Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.914190 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sch5q" event={"ID":"2e1b6744-00c5-44f1-a5e6-0056eef02141","Type":"ContainerDied","Data":"c1455f25e727ae67a4e3ddffdb45d264644768af439633b61d210e8cef395318"} Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.914211 4705 scope.go:117] "RemoveContainer" containerID="add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.951967 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sch5q"] Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.957113 4705 scope.go:117] "RemoveContainer" containerID="8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085" Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.962651 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sch5q"] Feb 16 16:12:23 crc kubenswrapper[4705]: I0216 16:12:23.982626 4705 scope.go:117] "RemoveContainer" containerID="3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.039247 4705 scope.go:117] "RemoveContainer" containerID="add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb" Feb 16 16:12:24 crc kubenswrapper[4705]: E0216 16:12:24.039717 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb\": container with ID starting with add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb not found: ID does not exist" containerID="add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.039764 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb"} err="failed to get container status \"add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb\": rpc error: code = NotFound desc = could not find container \"add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb\": container with ID starting with add035332fa086fe92470df1f65cffc8520014f0ddc43dcdae61f6575d35dfdb not found: ID does not exist" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.039799 4705 scope.go:117] "RemoveContainer" containerID="8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085" Feb 16 16:12:24 crc kubenswrapper[4705]: E0216 16:12:24.040687 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085\": container with ID starting with 8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085 not found: ID does not exist" containerID="8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.040720 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085"} err="failed to get container status \"8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085\": rpc error: code = NotFound desc = could not find container \"8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085\": container with ID 
starting with 8db3aed7ebfb74b2386de286c13418dacec070fb478a68ba6bf69030ebac6085 not found: ID does not exist" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.040741 4705 scope.go:117] "RemoveContainer" containerID="3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086" Feb 16 16:12:24 crc kubenswrapper[4705]: E0216 16:12:24.041044 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086\": container with ID starting with 3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086 not found: ID does not exist" containerID="3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.041087 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086"} err="failed to get container status \"3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086\": rpc error: code = NotFound desc = could not find container \"3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086\": container with ID starting with 3698cff23db861a3e46065b25b76dea623ea791131fac0980ccbe6dca9b40086 not found: ID does not exist" Feb 16 16:12:24 crc kubenswrapper[4705]: I0216 16:12:24.432090 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" path="/var/lib/kubelet/pods/2e1b6744-00c5-44f1-a5e6-0056eef02141/volumes" Feb 16 16:12:25 crc kubenswrapper[4705]: E0216 16:12:25.421615 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 
16:12:26 crc kubenswrapper[4705]: E0216 16:12:26.433750 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:12:27 crc kubenswrapper[4705]: I0216 16:12:27.006202 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:27 crc kubenswrapper[4705]: I0216 16:12:27.006335 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:27 crc kubenswrapper[4705]: I0216 16:12:27.211052 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:28 crc kubenswrapper[4705]: I0216 16:12:28.041810 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:29 crc kubenswrapper[4705]: I0216 16:12:29.230563 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hdb5"] Feb 16 16:12:29 crc kubenswrapper[4705]: I0216 16:12:29.419771 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:12:29 crc kubenswrapper[4705]: E0216 16:12:29.420345 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.038229 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln"] Feb 16 16:12:30 crc kubenswrapper[4705]: E0216 16:12:30.038848 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="extract-content" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.038867 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="extract-content" Feb 16 16:12:30 crc kubenswrapper[4705]: E0216 16:12:30.038895 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="extract-utilities" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.038902 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="extract-utilities" Feb 16 16:12:30 crc kubenswrapper[4705]: E0216 16:12:30.038937 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.038942 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.039193 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e1b6744-00c5-44f1-a5e6-0056eef02141" containerName="registry-server" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.040163 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.043314 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.044551 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.045043 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.046137 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7dkkk" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.074526 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln"] Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.182294 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.182521 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stfll\" (UniqueName: \"kubernetes.io/projected/ca989d06-e6a2-47cc-abc9-17d4c2740830-kube-api-access-stfll\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 
16:12:30.182906 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.285027 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.285103 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stfll\" (UniqueName: \"kubernetes.io/projected/ca989d06-e6a2-47cc-abc9-17d4c2740830-kube-api-access-stfll\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.285188 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.291495 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.292276 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.301279 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stfll\" (UniqueName: \"kubernetes.io/projected/ca989d06-e6a2-47cc-abc9-17d4c2740830-kube-api-access-stfll\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mtzln\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.370814 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:12:30 crc kubenswrapper[4705]: I0216 16:12:30.948211 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln"] Feb 16 16:12:30 crc kubenswrapper[4705]: W0216 16:12:30.964699 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca989d06_e6a2_47cc_abc9_17d4c2740830.slice/crio-f9cc4882d12cd6bbd6c0b01f14406f9932185608cb29450e7dea3a3ab9e0092f WatchSource:0}: Error finding container f9cc4882d12cd6bbd6c0b01f14406f9932185608cb29450e7dea3a3ab9e0092f: Status 404 returned error can't find the container with id f9cc4882d12cd6bbd6c0b01f14406f9932185608cb29450e7dea3a3ab9e0092f Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.015519 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" event={"ID":"ca989d06-e6a2-47cc-abc9-17d4c2740830","Type":"ContainerStarted","Data":"f9cc4882d12cd6bbd6c0b01f14406f9932185608cb29450e7dea3a3ab9e0092f"} Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.015646 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6hdb5" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="registry-server" containerID="cri-o://f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32" gracePeriod=2 Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.705766 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.722209 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-catalog-content\") pod \"6dba665a-c068-4b3b-aab8-2f915e391d01\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.722332 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m269\" (UniqueName: \"kubernetes.io/projected/6dba665a-c068-4b3b-aab8-2f915e391d01-kube-api-access-7m269\") pod \"6dba665a-c068-4b3b-aab8-2f915e391d01\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.722539 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-utilities\") pod \"6dba665a-c068-4b3b-aab8-2f915e391d01\" (UID: \"6dba665a-c068-4b3b-aab8-2f915e391d01\") " Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.723560 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-utilities" (OuterVolumeSpecName: "utilities") pod "6dba665a-c068-4b3b-aab8-2f915e391d01" (UID: "6dba665a-c068-4b3b-aab8-2f915e391d01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.758678 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dba665a-c068-4b3b-aab8-2f915e391d01-kube-api-access-7m269" (OuterVolumeSpecName: "kube-api-access-7m269") pod "6dba665a-c068-4b3b-aab8-2f915e391d01" (UID: "6dba665a-c068-4b3b-aab8-2f915e391d01"). InnerVolumeSpecName "kube-api-access-7m269". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.827339 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.827420 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m269\" (UniqueName: \"kubernetes.io/projected/6dba665a-c068-4b3b-aab8-2f915e391d01-kube-api-access-7m269\") on node \"crc\" DevicePath \"\"" Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.830466 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dba665a-c068-4b3b-aab8-2f915e391d01" (UID: "6dba665a-c068-4b3b-aab8-2f915e391d01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:12:31 crc kubenswrapper[4705]: I0216 16:12:31.930184 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dba665a-c068-4b3b-aab8-2f915e391d01-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.027012 4705 generic.go:334] "Generic (PLEG): container finished" podID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerID="f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32" exitCode=0 Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.027079 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hdb5" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.027108 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerDied","Data":"f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32"} Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.028171 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdb5" event={"ID":"6dba665a-c068-4b3b-aab8-2f915e391d01","Type":"ContainerDied","Data":"e746161fc044c75e8acd8117a0333abaa4b06b4ef3fc647c45b24c2d95739d45"} Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.028222 4705 scope.go:117] "RemoveContainer" containerID="f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.029522 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" event={"ID":"ca989d06-e6a2-47cc-abc9-17d4c2740830","Type":"ContainerStarted","Data":"8e5d7f431f36f6fd6b00e87e15a1127f73153560045501b572102700a9673a6b"} Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.060332 4705 scope.go:117] "RemoveContainer" containerID="e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.064511 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" podStartSLOduration=1.5815447969999998 podStartE2EDuration="2.06449504s" podCreationTimestamp="2026-02-16 16:12:30 +0000 UTC" firstStartedPulling="2026-02-16 16:12:30.968223242 +0000 UTC m=+4745.153200328" lastFinishedPulling="2026-02-16 16:12:31.451173495 +0000 UTC m=+4745.636150571" observedRunningTime="2026-02-16 16:12:32.054848148 +0000 UTC m=+4746.239825234" 
watchObservedRunningTime="2026-02-16 16:12:32.06449504 +0000 UTC m=+4746.249472106" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.090266 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hdb5"] Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.101828 4705 scope.go:117] "RemoveContainer" containerID="9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.105599 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6hdb5"] Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.144027 4705 scope.go:117] "RemoveContainer" containerID="f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32" Feb 16 16:12:32 crc kubenswrapper[4705]: E0216 16:12:32.144785 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32\": container with ID starting with f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32 not found: ID does not exist" containerID="f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.144815 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32"} err="failed to get container status \"f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32\": rpc error: code = NotFound desc = could not find container \"f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32\": container with ID starting with f9ef9581c383b205ef02364f53784e094ae97b0b44109d487061d08a5df1ac32 not found: ID does not exist" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.144835 4705 scope.go:117] "RemoveContainer" 
containerID="e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7" Feb 16 16:12:32 crc kubenswrapper[4705]: E0216 16:12:32.145207 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7\": container with ID starting with e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7 not found: ID does not exist" containerID="e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.145242 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7"} err="failed to get container status \"e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7\": rpc error: code = NotFound desc = could not find container \"e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7\": container with ID starting with e9a99fdc0a71d2b4ce5564b7befe7330fdadef5d6968898bfc7d6a6c58e801c7 not found: ID does not exist" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.145255 4705 scope.go:117] "RemoveContainer" containerID="9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188" Feb 16 16:12:32 crc kubenswrapper[4705]: E0216 16:12:32.145700 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188\": container with ID starting with 9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188 not found: ID does not exist" containerID="9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.145789 4705 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188"} err="failed to get container status \"9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188\": rpc error: code = NotFound desc = could not find container \"9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188\": container with ID starting with 9e8af1e7f81f456903b51e7f965f200273b7374bc78c281017ef1a4913498188 not found: ID does not exist" Feb 16 16:12:32 crc kubenswrapper[4705]: I0216 16:12:32.436810 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" path="/var/lib/kubelet/pods/6dba665a-c068-4b3b-aab8-2f915e391d01/volumes" Feb 16 16:12:36 crc kubenswrapper[4705]: E0216 16:12:36.432665 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:12:41 crc kubenswrapper[4705]: E0216 16:12:41.423991 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:12:44 crc kubenswrapper[4705]: I0216 16:12:44.421629 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:12:45 crc kubenswrapper[4705]: I0216 16:12:45.179449 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"d6523e83b871b3a8268e7ae5f03126a54da965a52357ee40fffddff41dcc4d3a"} Feb 16 16:12:48 crc kubenswrapper[4705]: E0216 16:12:48.424326 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:12:53 crc kubenswrapper[4705]: E0216 16:12:53.422628 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:13:00 crc kubenswrapper[4705]: E0216 16:13:00.423185 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:13:08 crc kubenswrapper[4705]: E0216 16:13:08.425005 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:13:11 crc kubenswrapper[4705]: E0216 16:13:11.422613 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:13:19 crc kubenswrapper[4705]: E0216 16:13:19.422904 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:13:26 crc kubenswrapper[4705]: E0216 16:13:26.432398 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:13:33 crc kubenswrapper[4705]: E0216 16:13:33.422819 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:13:39 crc kubenswrapper[4705]: E0216 16:13:39.425516 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:13:47 crc kubenswrapper[4705]: E0216 16:13:47.423077 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:13:54 crc kubenswrapper[4705]: E0216 16:13:54.422123 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:13:58 crc kubenswrapper[4705]: E0216 16:13:58.424935 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:14:06 crc kubenswrapper[4705]: E0216 16:14:06.433534 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:14:11 crc kubenswrapper[4705]: E0216 16:14:11.423059 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:14:17 crc kubenswrapper[4705]: E0216 16:14:17.422467 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:14:24 crc kubenswrapper[4705]: E0216 16:14:24.422543 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:14:32 crc kubenswrapper[4705]: E0216 16:14:32.422013 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:14:35 crc kubenswrapper[4705]: E0216 16:14:35.423588 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:14:46 crc kubenswrapper[4705]: E0216 16:14:46.435225 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:14:47 crc kubenswrapper[4705]: E0216 16:14:47.422278 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:14:58 crc kubenswrapper[4705]: E0216 16:14:58.425112 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:14:59 crc kubenswrapper[4705]: E0216 16:14:59.421247 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.261622 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57"] Feb 16 16:15:00 crc kubenswrapper[4705]: E0216 16:15:00.262499 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="extract-content" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.262535 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="extract-content" Feb 16 16:15:00 crc kubenswrapper[4705]: E0216 16:15:00.262579 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="extract-utilities" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.262590 4705 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="extract-utilities" Feb 16 16:15:00 crc kubenswrapper[4705]: E0216 16:15:00.262655 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="registry-server" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.262666 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="registry-server" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.262963 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dba665a-c068-4b3b-aab8-2f915e391d01" containerName="registry-server" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.264174 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.267380 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.277752 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57"] Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.283880 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.422036 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25462ce-23b8-42a7-aeda-3a8c72505a1c-secret-volume\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.422178 4705 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68k49\" (UniqueName: \"kubernetes.io/projected/b25462ce-23b8-42a7-aeda-3a8c72505a1c-kube-api-access-68k49\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.422317 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25462ce-23b8-42a7-aeda-3a8c72505a1c-config-volume\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.525277 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68k49\" (UniqueName: \"kubernetes.io/projected/b25462ce-23b8-42a7-aeda-3a8c72505a1c-kube-api-access-68k49\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.525747 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25462ce-23b8-42a7-aeda-3a8c72505a1c-config-volume\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.526044 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25462ce-23b8-42a7-aeda-3a8c72505a1c-secret-volume\") pod \"collect-profiles-29520975-v7g57\" (UID: 
\"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.526620 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25462ce-23b8-42a7-aeda-3a8c72505a1c-config-volume\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.632268 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25462ce-23b8-42a7-aeda-3a8c72505a1c-secret-volume\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.633324 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68k49\" (UniqueName: \"kubernetes.io/projected/b25462ce-23b8-42a7-aeda-3a8c72505a1c-kube-api-access-68k49\") pod \"collect-profiles-29520975-v7g57\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:00 crc kubenswrapper[4705]: I0216 16:15:00.915789 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:01 crc kubenswrapper[4705]: W0216 16:15:01.383841 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb25462ce_23b8_42a7_aeda_3a8c72505a1c.slice/crio-8d1a1bd86c89a511d6e7bfc86f8f83f23c259039b6d4d2491f86c22d1c84551e WatchSource:0}: Error finding container 8d1a1bd86c89a511d6e7bfc86f8f83f23c259039b6d4d2491f86c22d1c84551e: Status 404 returned error can't find the container with id 8d1a1bd86c89a511d6e7bfc86f8f83f23c259039b6d4d2491f86c22d1c84551e Feb 16 16:15:01 crc kubenswrapper[4705]: I0216 16:15:01.384564 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57"] Feb 16 16:15:01 crc kubenswrapper[4705]: I0216 16:15:01.684838 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:15:01 crc kubenswrapper[4705]: I0216 16:15:01.685243 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:15:01 crc kubenswrapper[4705]: I0216 16:15:01.969262 4705 generic.go:334] "Generic (PLEG): container finished" podID="b25462ce-23b8-42a7-aeda-3a8c72505a1c" containerID="8b2f1168697d511f3681e813ab15d4f8950b127d44cb2e7a0f464220baa3ed20" exitCode=0 Feb 16 16:15:01 crc kubenswrapper[4705]: I0216 16:15:01.969313 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" event={"ID":"b25462ce-23b8-42a7-aeda-3a8c72505a1c","Type":"ContainerDied","Data":"8b2f1168697d511f3681e813ab15d4f8950b127d44cb2e7a0f464220baa3ed20"} Feb 16 16:15:01 crc kubenswrapper[4705]: I0216 16:15:01.969380 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" event={"ID":"b25462ce-23b8-42a7-aeda-3a8c72505a1c","Type":"ContainerStarted","Data":"8d1a1bd86c89a511d6e7bfc86f8f83f23c259039b6d4d2491f86c22d1c84551e"} Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.403151 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.414299 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68k49\" (UniqueName: \"kubernetes.io/projected/b25462ce-23b8-42a7-aeda-3a8c72505a1c-kube-api-access-68k49\") pod \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.425021 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b25462ce-23b8-42a7-aeda-3a8c72505a1c-kube-api-access-68k49" (OuterVolumeSpecName: "kube-api-access-68k49") pod "b25462ce-23b8-42a7-aeda-3a8c72505a1c" (UID: "b25462ce-23b8-42a7-aeda-3a8c72505a1c"). InnerVolumeSpecName "kube-api-access-68k49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.517053 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25462ce-23b8-42a7-aeda-3a8c72505a1c-secret-volume\") pod \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.519611 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25462ce-23b8-42a7-aeda-3a8c72505a1c-config-volume\") pod \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\" (UID: \"b25462ce-23b8-42a7-aeda-3a8c72505a1c\") " Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.520828 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b25462ce-23b8-42a7-aeda-3a8c72505a1c-config-volume" (OuterVolumeSpecName: "config-volume") pod "b25462ce-23b8-42a7-aeda-3a8c72505a1c" (UID: "b25462ce-23b8-42a7-aeda-3a8c72505a1c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.523013 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25462ce-23b8-42a7-aeda-3a8c72505a1c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.523821 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68k49\" (UniqueName: \"kubernetes.io/projected/b25462ce-23b8-42a7-aeda-3a8c72505a1c-kube-api-access-68k49\") on node \"crc\" DevicePath \"\"" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.542389 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25462ce-23b8-42a7-aeda-3a8c72505a1c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b25462ce-23b8-42a7-aeda-3a8c72505a1c" (UID: "b25462ce-23b8-42a7-aeda-3a8c72505a1c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.625635 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b25462ce-23b8-42a7-aeda-3a8c72505a1c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.988937 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" event={"ID":"b25462ce-23b8-42a7-aeda-3a8c72505a1c","Type":"ContainerDied","Data":"8d1a1bd86c89a511d6e7bfc86f8f83f23c259039b6d4d2491f86c22d1c84551e"} Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.988989 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d1a1bd86c89a511d6e7bfc86f8f83f23c259039b6d4d2491f86c22d1c84551e" Feb 16 16:15:03 crc kubenswrapper[4705]: I0216 16:15:03.989111 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520975-v7g57" Feb 16 16:15:04 crc kubenswrapper[4705]: I0216 16:15:04.500529 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4"] Feb 16 16:15:04 crc kubenswrapper[4705]: I0216 16:15:04.514321 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-xzxs4"] Feb 16 16:15:06 crc kubenswrapper[4705]: I0216 16:15:06.433038 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a4c227-649b-4c63-a135-9e62204fb5e6" path="/var/lib/kubelet/pods/d7a4c227-649b-4c63-a135-9e62204fb5e6/volumes" Feb 16 16:15:10 crc kubenswrapper[4705]: E0216 16:15:10.423601 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:15:11 crc kubenswrapper[4705]: E0216 16:15:11.422922 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:15:21 crc kubenswrapper[4705]: E0216 16:15:21.423416 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:15:25 crc kubenswrapper[4705]: E0216 
16:15:25.423106 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:15:31 crc kubenswrapper[4705]: I0216 16:15:31.685068 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:15:31 crc kubenswrapper[4705]: I0216 16:15:31.685737 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:15:33 crc kubenswrapper[4705]: E0216 16:15:33.423047 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:15:36 crc kubenswrapper[4705]: I0216 16:15:36.497810 4705 scope.go:117] "RemoveContainer" containerID="3d19ac739f139aac059dd3041dabf5e11ac0e7c9a2e1687b953e4ecc1918d35b" Feb 16 16:15:38 crc kubenswrapper[4705]: E0216 16:15:38.424251 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:15:47 crc kubenswrapper[4705]: E0216 16:15:47.423252 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:15:52 crc kubenswrapper[4705]: E0216 16:15:52.424094 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:15:59 crc kubenswrapper[4705]: E0216 16:15:59.424879 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:16:01 crc kubenswrapper[4705]: I0216 16:16:01.684025 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:16:01 crc kubenswrapper[4705]: I0216 16:16:01.684524 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 16 16:16:01 crc kubenswrapper[4705]: I0216 16:16:01.684574 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 16:16:01 crc kubenswrapper[4705]: I0216 16:16:01.685461 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6523e83b871b3a8268e7ae5f03126a54da965a52357ee40fffddff41dcc4d3a"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 16:16:01 crc kubenswrapper[4705]: I0216 16:16:01.685516 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://d6523e83b871b3a8268e7ae5f03126a54da965a52357ee40fffddff41dcc4d3a" gracePeriod=600 Feb 16 16:16:02 crc kubenswrapper[4705]: I0216 16:16:02.725758 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="d6523e83b871b3a8268e7ae5f03126a54da965a52357ee40fffddff41dcc4d3a" exitCode=0 Feb 16 16:16:02 crc kubenswrapper[4705]: I0216 16:16:02.726550 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"d6523e83b871b3a8268e7ae5f03126a54da965a52357ee40fffddff41dcc4d3a"} Feb 16 16:16:02 crc kubenswrapper[4705]: I0216 16:16:02.726608 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"} Feb 16 16:16:02 crc kubenswrapper[4705]: I0216 16:16:02.726632 4705 scope.go:117] "RemoveContainer" containerID="e5ddf684591c559953eecbb52da723c86280230cc1079848f944f701bd846044" Feb 16 16:16:06 crc kubenswrapper[4705]: E0216 16:16:06.432928 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:16:12 crc kubenswrapper[4705]: E0216 16:16:12.424083 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:16:20 crc kubenswrapper[4705]: I0216 16:16:20.424176 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 16:16:20 crc kubenswrapper[4705]: E0216 16:16:20.562744 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:16:20 crc kubenswrapper[4705]: E0216 16:16:20.562867 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:16:20 crc kubenswrapper[4705]: E0216 16:16:20.563217 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5
d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:16:20 crc kubenswrapper[4705]: E0216 16:16:20.564533 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:16:23 crc kubenswrapper[4705]: E0216 16:16:23.507291 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:16:23 crc kubenswrapper[4705]: E0216 16:16:23.508051 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:16:23 crc kubenswrapper[4705]: E0216 16:16:23.508220 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:16:23 crc kubenswrapper[4705]: E0216 16:16:23.509480 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:16:32 crc kubenswrapper[4705]: E0216 16:16:32.423743 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:16:38 crc kubenswrapper[4705]: E0216 16:16:38.425415 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:16:47 crc kubenswrapper[4705]: E0216 16:16:47.421850 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:16:50 crc kubenswrapper[4705]: E0216 16:16:50.422558 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:17:01 crc kubenswrapper[4705]: E0216 16:17:01.425431 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:17:04 crc kubenswrapper[4705]: E0216 16:17:04.421728 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:17:13 crc kubenswrapper[4705]: E0216 16:17:13.423173 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:17:15 crc kubenswrapper[4705]: E0216 16:17:15.423463 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:17:28 crc kubenswrapper[4705]: E0216 16:17:28.422131 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:17:29 crc kubenswrapper[4705]: E0216 16:17:29.423042 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:17:43 crc kubenswrapper[4705]: E0216 16:17:43.422007 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:17:43 crc kubenswrapper[4705]: E0216 16:17:43.422237 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:17:56 crc kubenswrapper[4705]: E0216 16:17:56.434123 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:17:57 crc kubenswrapper[4705]: E0216 16:17:57.423736 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:18:09 crc kubenswrapper[4705]: E0216 16:18:09.424013 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:18:10 crc kubenswrapper[4705]: E0216 16:18:10.424699 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:18:21 crc kubenswrapper[4705]: E0216 16:18:21.426055 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:18:25 crc kubenswrapper[4705]: E0216 16:18:25.423187 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:18:31 crc kubenswrapper[4705]: I0216 16:18:31.684276 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:18:31 crc kubenswrapper[4705]: I0216 16:18:31.684790 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:18:35 crc kubenswrapper[4705]: E0216 16:18:35.423514 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:18:35 crc kubenswrapper[4705]: I0216 16:18:35.912607 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ptlxj"] Feb 16 16:18:35 crc kubenswrapper[4705]: E0216 16:18:35.913871 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b25462ce-23b8-42a7-aeda-3a8c72505a1c" containerName="collect-profiles" Feb 16 16:18:35 crc kubenswrapper[4705]: I0216 16:18:35.913924 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b25462ce-23b8-42a7-aeda-3a8c72505a1c" containerName="collect-profiles" Feb 16 16:18:35 crc kubenswrapper[4705]: I0216 16:18:35.914596 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b25462ce-23b8-42a7-aeda-3a8c72505a1c" containerName="collect-profiles" Feb 16 16:18:35 crc kubenswrapper[4705]: I0216 16:18:35.918645 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:35 crc kubenswrapper[4705]: I0216 16:18:35.933083 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptlxj"] Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.076859 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-catalog-content\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.077495 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-utilities\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.077734 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlf54\" (UniqueName: \"kubernetes.io/projected/d2cc514e-4501-4dde-a3ce-442097cf4824-kube-api-access-mlf54\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.181333 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-utilities\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.181507 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mlf54\" (UniqueName: \"kubernetes.io/projected/d2cc514e-4501-4dde-a3ce-442097cf4824-kube-api-access-mlf54\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.182085 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-utilities\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.186130 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-catalog-content\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.186630 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-catalog-content\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.216592 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlf54\" (UniqueName: \"kubernetes.io/projected/d2cc514e-4501-4dde-a3ce-442097cf4824-kube-api-access-mlf54\") pod \"community-operators-ptlxj\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.262090 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:36 crc kubenswrapper[4705]: I0216 16:18:36.900921 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptlxj"] Feb 16 16:18:37 crc kubenswrapper[4705]: I0216 16:18:37.787519 4705 generic.go:334] "Generic (PLEG): container finished" podID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerID="21669da6af69e10615ec9d9bfd683312766c7eb62e5afb7d2c4d0c330e7be906" exitCode=0 Feb 16 16:18:37 crc kubenswrapper[4705]: I0216 16:18:37.787588 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerDied","Data":"21669da6af69e10615ec9d9bfd683312766c7eb62e5afb7d2c4d0c330e7be906"} Feb 16 16:18:37 crc kubenswrapper[4705]: I0216 16:18:37.787623 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerStarted","Data":"74791c97490a6d2870982096091a8f9775bf5d67f5c84b13bceb4d2757a31478"} Feb 16 16:18:38 crc kubenswrapper[4705]: E0216 16:18:38.426172 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:18:38 crc kubenswrapper[4705]: I0216 16:18:38.804645 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerStarted","Data":"f76a2880637ec8e061f810a39410c0ce57f54c2c68714b7a697e5bece42d51ef"} Feb 16 16:18:39 crc kubenswrapper[4705]: I0216 16:18:39.820836 4705 generic.go:334] "Generic (PLEG): container 
finished" podID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerID="f76a2880637ec8e061f810a39410c0ce57f54c2c68714b7a697e5bece42d51ef" exitCode=0 Feb 16 16:18:39 crc kubenswrapper[4705]: I0216 16:18:39.820896 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerDied","Data":"f76a2880637ec8e061f810a39410c0ce57f54c2c68714b7a697e5bece42d51ef"} Feb 16 16:18:41 crc kubenswrapper[4705]: I0216 16:18:41.849118 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerStarted","Data":"8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c"} Feb 16 16:18:41 crc kubenswrapper[4705]: I0216 16:18:41.882165 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ptlxj" podStartSLOduration=4.364152852 podStartE2EDuration="6.882137831s" podCreationTimestamp="2026-02-16 16:18:35 +0000 UTC" firstStartedPulling="2026-02-16 16:18:37.790701891 +0000 UTC m=+5111.975678967" lastFinishedPulling="2026-02-16 16:18:40.30868683 +0000 UTC m=+5114.493663946" observedRunningTime="2026-02-16 16:18:41.873756974 +0000 UTC m=+5116.058734060" watchObservedRunningTime="2026-02-16 16:18:41.882137831 +0000 UTC m=+5116.067114907" Feb 16 16:18:46 crc kubenswrapper[4705]: I0216 16:18:46.263226 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:46 crc kubenswrapper[4705]: I0216 16:18:46.263773 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:46 crc kubenswrapper[4705]: I0216 16:18:46.340172 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ptlxj" Feb 
16 16:18:47 crc kubenswrapper[4705]: I0216 16:18:47.001771 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:49 crc kubenswrapper[4705]: E0216 16:18:49.423841 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:18:49 crc kubenswrapper[4705]: I0216 16:18:49.966559 4705 generic.go:334] "Generic (PLEG): container finished" podID="ca989d06-e6a2-47cc-abc9-17d4c2740830" containerID="8e5d7f431f36f6fd6b00e87e15a1127f73153560045501b572102700a9673a6b" exitCode=2 Feb 16 16:18:49 crc kubenswrapper[4705]: I0216 16:18:49.966614 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" event={"ID":"ca989d06-e6a2-47cc-abc9-17d4c2740830","Type":"ContainerDied","Data":"8e5d7f431f36f6fd6b00e87e15a1127f73153560045501b572102700a9673a6b"} Feb 16 16:18:50 crc kubenswrapper[4705]: E0216 16:18:50.421761 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.672952 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jm2rk"] Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.676316 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.685212 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm2rk"] Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.733491 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csc6v\" (UniqueName: \"kubernetes.io/projected/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-kube-api-access-csc6v\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.733592 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-utilities\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.733653 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-catalog-content\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.835445 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csc6v\" (UniqueName: \"kubernetes.io/projected/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-kube-api-access-csc6v\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.835524 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-utilities\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.835572 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-catalog-content\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.836181 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-utilities\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.836257 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-catalog-content\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:50 crc kubenswrapper[4705]: I0216 16:18:50.866324 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csc6v\" (UniqueName: \"kubernetes.io/projected/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-kube-api-access-csc6v\") pod \"redhat-marketplace-jm2rk\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.003694 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.576322 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.679289 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-inventory\") pod \"ca989d06-e6a2-47cc-abc9-17d4c2740830\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.679328 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stfll\" (UniqueName: \"kubernetes.io/projected/ca989d06-e6a2-47cc-abc9-17d4c2740830-kube-api-access-stfll\") pod \"ca989d06-e6a2-47cc-abc9-17d4c2740830\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.679529 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-ssh-key-openstack-edpm-ipam\") pod \"ca989d06-e6a2-47cc-abc9-17d4c2740830\" (UID: \"ca989d06-e6a2-47cc-abc9-17d4c2740830\") " Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.684232 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm2rk"] Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.689358 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca989d06-e6a2-47cc-abc9-17d4c2740830-kube-api-access-stfll" (OuterVolumeSpecName: "kube-api-access-stfll") pod "ca989d06-e6a2-47cc-abc9-17d4c2740830" (UID: "ca989d06-e6a2-47cc-abc9-17d4c2740830"). InnerVolumeSpecName "kube-api-access-stfll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.723552 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-inventory" (OuterVolumeSpecName: "inventory") pod "ca989d06-e6a2-47cc-abc9-17d4c2740830" (UID: "ca989d06-e6a2-47cc-abc9-17d4c2740830"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.732504 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ca989d06-e6a2-47cc-abc9-17d4c2740830" (UID: "ca989d06-e6a2-47cc-abc9-17d4c2740830"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.785775 4705 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.785809 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stfll\" (UniqueName: \"kubernetes.io/projected/ca989d06-e6a2-47cc-abc9-17d4c2740830-kube-api-access-stfll\") on node \"crc\" DevicePath \"\"" Feb 16 16:18:51 crc kubenswrapper[4705]: I0216 16:18:51.785820 4705 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca989d06-e6a2-47cc-abc9-17d4c2740830-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 16:18:52 crc kubenswrapper[4705]: I0216 16:18:52.000526 4705 generic.go:334] "Generic (PLEG): container finished" podID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" 
containerID="788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731" exitCode=0 Feb 16 16:18:52 crc kubenswrapper[4705]: I0216 16:18:52.001138 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerDied","Data":"788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731"} Feb 16 16:18:52 crc kubenswrapper[4705]: I0216 16:18:52.001207 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerStarted","Data":"81f32445518ea8cbafd663f15aa0508e04266932532a988d329155b948f3a4be"} Feb 16 16:18:52 crc kubenswrapper[4705]: I0216 16:18:52.014344 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" event={"ID":"ca989d06-e6a2-47cc-abc9-17d4c2740830","Type":"ContainerDied","Data":"f9cc4882d12cd6bbd6c0b01f14406f9932185608cb29450e7dea3a3ab9e0092f"} Feb 16 16:18:52 crc kubenswrapper[4705]: I0216 16:18:52.014423 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9cc4882d12cd6bbd6c0b01f14406f9932185608cb29450e7dea3a3ab9e0092f" Feb 16 16:18:52 crc kubenswrapper[4705]: I0216 16:18:52.014501 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mtzln" Feb 16 16:18:53 crc kubenswrapper[4705]: I0216 16:18:53.030587 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerStarted","Data":"17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c"} Feb 16 16:18:54 crc kubenswrapper[4705]: I0216 16:18:54.044241 4705 generic.go:334] "Generic (PLEG): container finished" podID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerID="17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c" exitCode=0 Feb 16 16:18:54 crc kubenswrapper[4705]: I0216 16:18:54.044293 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerDied","Data":"17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c"} Feb 16 16:18:55 crc kubenswrapper[4705]: I0216 16:18:55.062802 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerStarted","Data":"8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a"} Feb 16 16:18:55 crc kubenswrapper[4705]: I0216 16:18:55.096570 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jm2rk" podStartSLOduration=2.623505394 podStartE2EDuration="5.096542814s" podCreationTimestamp="2026-02-16 16:18:50 +0000 UTC" firstStartedPulling="2026-02-16 16:18:52.005486898 +0000 UTC m=+5126.190463974" lastFinishedPulling="2026-02-16 16:18:54.478524318 +0000 UTC m=+5128.663501394" observedRunningTime="2026-02-16 16:18:55.084170875 +0000 UTC m=+5129.269147951" watchObservedRunningTime="2026-02-16 16:18:55.096542814 +0000 UTC m=+5129.281519890" Feb 16 16:18:56 crc kubenswrapper[4705]: 
I0216 16:18:56.463361 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptlxj"] Feb 16 16:18:56 crc kubenswrapper[4705]: I0216 16:18:56.463694 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ptlxj" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="registry-server" containerID="cri-o://8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" gracePeriod=2 Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.650538 4705 log.go:32] "ExecSync cmd from runtime service failed" err=< Feb 16 16:18:56 crc kubenswrapper[4705]: rpc error: code = Unknown desc = command error: setns `mnt`: Bad file descriptor Feb 16 16:18:56 crc kubenswrapper[4705]: fail startup Feb 16 16:18:56 crc kubenswrapper[4705]: , stdout: , stderr: , exit code -1 Feb 16 16:18:56 crc kubenswrapper[4705]: > containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.652186 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c is running failed: container process not found" containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.653047 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c is running failed: container process not found" containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 
16:18:56.653184 4705 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-ptlxj" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="registry-server" Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.659519 4705 log.go:32] "ExecSync cmd from runtime service failed" err=< Feb 16 16:18:56 crc kubenswrapper[4705]: rpc error: code = Unknown desc = command error: setns `mnt`: Bad file descriptor Feb 16 16:18:56 crc kubenswrapper[4705]: fail startup Feb 16 16:18:56 crc kubenswrapper[4705]: , stdout: , stderr: , exit code -1 Feb 16 16:18:56 crc kubenswrapper[4705]: > containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.660422 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c is running failed: container process not found" containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.660926 4705 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c is running failed: container process not found" containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 16:18:56 crc kubenswrapper[4705]: E0216 16:18:56.660981 4705 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c is running failed: container process not found" probeType="Liveness" pod="openshift-marketplace/community-operators-ptlxj" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="registry-server" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.092728 4705 generic.go:334] "Generic (PLEG): container finished" podID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" exitCode=0 Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.092848 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerDied","Data":"8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c"} Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.093238 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptlxj" event={"ID":"d2cc514e-4501-4dde-a3ce-442097cf4824","Type":"ContainerDied","Data":"74791c97490a6d2870982096091a8f9775bf5d67f5c84b13bceb4d2757a31478"} Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.093258 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74791c97490a6d2870982096091a8f9775bf5d67f5c84b13bceb4d2757a31478" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.124151 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.289218 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-utilities\") pod \"d2cc514e-4501-4dde-a3ce-442097cf4824\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.289785 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlf54\" (UniqueName: \"kubernetes.io/projected/d2cc514e-4501-4dde-a3ce-442097cf4824-kube-api-access-mlf54\") pod \"d2cc514e-4501-4dde-a3ce-442097cf4824\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.289896 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-catalog-content\") pod \"d2cc514e-4501-4dde-a3ce-442097cf4824\" (UID: \"d2cc514e-4501-4dde-a3ce-442097cf4824\") " Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.290285 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-utilities" (OuterVolumeSpecName: "utilities") pod "d2cc514e-4501-4dde-a3ce-442097cf4824" (UID: "d2cc514e-4501-4dde-a3ce-442097cf4824"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.291646 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.298758 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2cc514e-4501-4dde-a3ce-442097cf4824-kube-api-access-mlf54" (OuterVolumeSpecName: "kube-api-access-mlf54") pod "d2cc514e-4501-4dde-a3ce-442097cf4824" (UID: "d2cc514e-4501-4dde-a3ce-442097cf4824"). InnerVolumeSpecName "kube-api-access-mlf54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.348831 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2cc514e-4501-4dde-a3ce-442097cf4824" (UID: "d2cc514e-4501-4dde-a3ce-442097cf4824"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.395528 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlf54\" (UniqueName: \"kubernetes.io/projected/d2cc514e-4501-4dde-a3ce-442097cf4824-kube-api-access-mlf54\") on node \"crc\" DevicePath \"\"" Feb 16 16:18:57 crc kubenswrapper[4705]: I0216 16:18:57.395588 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2cc514e-4501-4dde-a3ce-442097cf4824-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:18:58 crc kubenswrapper[4705]: I0216 16:18:58.104543 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptlxj" Feb 16 16:18:58 crc kubenswrapper[4705]: I0216 16:18:58.150815 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptlxj"] Feb 16 16:18:58 crc kubenswrapper[4705]: I0216 16:18:58.159391 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ptlxj"] Feb 16 16:18:58 crc kubenswrapper[4705]: I0216 16:18:58.444317 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" path="/var/lib/kubelet/pods/d2cc514e-4501-4dde-a3ce-442097cf4824/volumes" Feb 16 16:19:01 crc kubenswrapper[4705]: I0216 16:19:01.004814 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:19:01 crc kubenswrapper[4705]: I0216 16:19:01.005887 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:19:01 crc kubenswrapper[4705]: I0216 16:19:01.071580 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:19:01 crc kubenswrapper[4705]: I0216 16:19:01.214164 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:19:01 crc kubenswrapper[4705]: I0216 16:19:01.684658 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:19:01 crc kubenswrapper[4705]: I0216 16:19:01.684737 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:19:02 crc kubenswrapper[4705]: I0216 16:19:02.271749 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm2rk"] Feb 16 16:19:02 crc kubenswrapper[4705]: E0216 16:19:02.424648 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:19:02 crc kubenswrapper[4705]: E0216 16:19:02.425544 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.166278 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jm2rk" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="registry-server" containerID="cri-o://8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a" gracePeriod=2 Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.720506 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.789100 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-catalog-content\") pod \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.789418 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-utilities\") pod \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.789578 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csc6v\" (UniqueName: \"kubernetes.io/projected/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-kube-api-access-csc6v\") pod \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\" (UID: \"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a\") " Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.790219 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-utilities" (OuterVolumeSpecName: "utilities") pod "c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" (UID: "c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.816699 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-kube-api-access-csc6v" (OuterVolumeSpecName: "kube-api-access-csc6v") pod "c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" (UID: "c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a"). InnerVolumeSpecName "kube-api-access-csc6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.818305 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" (UID: "c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.892622 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.892692 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csc6v\" (UniqueName: \"kubernetes.io/projected/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-kube-api-access-csc6v\") on node \"crc\" DevicePath \"\"" Feb 16 16:19:03 crc kubenswrapper[4705]: I0216 16:19:03.892707 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.190802 4705 generic.go:334] "Generic (PLEG): container finished" podID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerID="8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a" exitCode=0 Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.190866 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jm2rk" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.190876 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerDied","Data":"8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a"} Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.191720 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jm2rk" event={"ID":"c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a","Type":"ContainerDied","Data":"81f32445518ea8cbafd663f15aa0508e04266932532a988d329155b948f3a4be"} Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.191742 4705 scope.go:117] "RemoveContainer" containerID="8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.228780 4705 scope.go:117] "RemoveContainer" containerID="17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.251602 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm2rk"] Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.268565 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jm2rk"] Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.285937 4705 scope.go:117] "RemoveContainer" containerID="788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.364820 4705 scope.go:117] "RemoveContainer" containerID="8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a" Feb 16 16:19:04 crc kubenswrapper[4705]: E0216 16:19:04.365467 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a\": container with ID starting with 8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a not found: ID does not exist" containerID="8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.365512 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a"} err="failed to get container status \"8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a\": rpc error: code = NotFound desc = could not find container \"8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a\": container with ID starting with 8c2b074287aef74a4dc9e2ff60e5d3bb3738ebe88f751097a1e6f90c89760c3a not found: ID does not exist" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.365540 4705 scope.go:117] "RemoveContainer" containerID="17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c" Feb 16 16:19:04 crc kubenswrapper[4705]: E0216 16:19:04.365934 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c\": container with ID starting with 17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c not found: ID does not exist" containerID="17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.365974 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c"} err="failed to get container status \"17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c\": rpc error: code = NotFound desc = could not find container \"17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c\": container with ID 
starting with 17147be1b4ed7f7e877e9ac55421adcc57f3bc94c069d5d691eec4618110052c not found: ID does not exist" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.366006 4705 scope.go:117] "RemoveContainer" containerID="788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731" Feb 16 16:19:04 crc kubenswrapper[4705]: E0216 16:19:04.366565 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731\": container with ID starting with 788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731 not found: ID does not exist" containerID="788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.366621 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731"} err="failed to get container status \"788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731\": rpc error: code = NotFound desc = could not find container \"788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731\": container with ID starting with 788d4be2dacce32c33c0d8324e0372024f5f76bf5ff0d36c9019ef19a704d731 not found: ID does not exist" Feb 16 16:19:04 crc kubenswrapper[4705]: I0216 16:19:04.432131 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" path="/var/lib/kubelet/pods/c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a/volumes" Feb 16 16:19:17 crc kubenswrapper[4705]: E0216 16:19:17.422241 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 
16:19:17 crc kubenswrapper[4705]: E0216 16:19:17.422241 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:19:28 crc kubenswrapper[4705]: E0216 16:19:28.430411 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:19:31 crc kubenswrapper[4705]: E0216 16:19:31.423100 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:19:31 crc kubenswrapper[4705]: I0216 16:19:31.684006 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:19:31 crc kubenswrapper[4705]: I0216 16:19:31.684244 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:19:31 crc kubenswrapper[4705]: 
I0216 16:19:31.684338 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 16:19:31 crc kubenswrapper[4705]: I0216 16:19:31.685300 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 16:19:31 crc kubenswrapper[4705]: I0216 16:19:31.685364 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" gracePeriod=600 Feb 16 16:19:31 crc kubenswrapper[4705]: E0216 16:19:31.814102 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:19:32 crc kubenswrapper[4705]: I0216 16:19:32.611755 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" exitCode=0 Feb 16 16:19:32 crc kubenswrapper[4705]: I0216 16:19:32.611813 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71"} Feb 16 16:19:32 crc kubenswrapper[4705]: I0216 16:19:32.611860 4705 scope.go:117] "RemoveContainer" containerID="d6523e83b871b3a8268e7ae5f03126a54da965a52357ee40fffddff41dcc4d3a" Feb 16 16:19:32 crc kubenswrapper[4705]: I0216 16:19:32.612990 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:19:32 crc kubenswrapper[4705]: E0216 16:19:32.613293 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:19:42 crc kubenswrapper[4705]: E0216 16:19:42.422256 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:19:43 crc kubenswrapper[4705]: I0216 16:19:43.425953 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:19:43 crc kubenswrapper[4705]: E0216 16:19:43.426741 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:19:43 
crc kubenswrapper[4705]: E0216 16:19:43.426995 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:19:53 crc kubenswrapper[4705]: E0216 16:19:53.421642 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:19:54 crc kubenswrapper[4705]: E0216 16:19:54.421762 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:19:55 crc kubenswrapper[4705]: I0216 16:19:55.419814 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:19:55 crc kubenswrapper[4705]: E0216 16:19:55.420391 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:20:04 crc kubenswrapper[4705]: 
E0216 16:20:04.421532 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:20:06 crc kubenswrapper[4705]: E0216 16:20:06.428577 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:20:10 crc kubenswrapper[4705]: I0216 16:20:10.420212 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:20:10 crc kubenswrapper[4705]: E0216 16:20:10.420975 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.312775 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bbqmf/must-gather-tx2kt"] Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.313974 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="extract-content" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.313991 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" 
containerName="extract-content" Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.314016 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="extract-utilities" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314025 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="extract-utilities" Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.314035 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="registry-server" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314042 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="registry-server" Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.314068 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="registry-server" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314076 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="registry-server" Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.314103 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca989d06-e6a2-47cc-abc9-17d4c2740830" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314113 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca989d06-e6a2-47cc-abc9-17d4c2740830" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.314138 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="extract-content" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314144 4705 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="extract-content" Feb 16 16:20:14 crc kubenswrapper[4705]: E0216 16:20:14.314156 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="extract-utilities" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314163 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="extract-utilities" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314497 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2cc514e-4501-4dde-a3ce-442097cf4824" containerName="registry-server" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314522 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33d4b38-d6db-4fd4-87fa-3a4a3a7ce89a" containerName="registry-server" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.314538 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca989d06-e6a2-47cc-abc9-17d4c2740830" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.316577 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.323393 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bbqmf"/"openshift-service-ca.crt" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.323804 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bbqmf"/"kube-root-ca.crt" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.323625 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bbqmf"/"default-dockercfg-crlth" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.348797 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bbqmf/must-gather-tx2kt"] Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.429352 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3941987-2937-407a-a067-3f3af600f1f0-must-gather-output\") pod \"must-gather-tx2kt\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " pod="openshift-must-gather-bbqmf/must-gather-tx2kt" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.429452 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mmdn\" (UniqueName: \"kubernetes.io/projected/b3941987-2937-407a-a067-3f3af600f1f0-kube-api-access-9mmdn\") pod \"must-gather-tx2kt\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " pod="openshift-must-gather-bbqmf/must-gather-tx2kt" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.532074 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3941987-2937-407a-a067-3f3af600f1f0-must-gather-output\") pod \"must-gather-tx2kt\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " 
pod="openshift-must-gather-bbqmf/must-gather-tx2kt" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.532156 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mmdn\" (UniqueName: \"kubernetes.io/projected/b3941987-2937-407a-a067-3f3af600f1f0-kube-api-access-9mmdn\") pod \"must-gather-tx2kt\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " pod="openshift-must-gather-bbqmf/must-gather-tx2kt" Feb 16 16:20:14 crc kubenswrapper[4705]: I0216 16:20:14.533916 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3941987-2937-407a-a067-3f3af600f1f0-must-gather-output\") pod \"must-gather-tx2kt\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " pod="openshift-must-gather-bbqmf/must-gather-tx2kt" Feb 16 16:20:15 crc kubenswrapper[4705]: I0216 16:20:15.230152 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mmdn\" (UniqueName: \"kubernetes.io/projected/b3941987-2937-407a-a067-3f3af600f1f0-kube-api-access-9mmdn\") pod \"must-gather-tx2kt\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " pod="openshift-must-gather-bbqmf/must-gather-tx2kt" Feb 16 16:20:15 crc kubenswrapper[4705]: I0216 16:20:15.243241 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" Feb 16 16:20:15 crc kubenswrapper[4705]: I0216 16:20:15.822572 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bbqmf/must-gather-tx2kt"] Feb 16 16:20:16 crc kubenswrapper[4705]: I0216 16:20:16.118860 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" event={"ID":"b3941987-2937-407a-a067-3f3af600f1f0","Type":"ContainerStarted","Data":"c6c40e0f334072f7d56c077890f939b9cbaea7957db41512667c103bfd229c9c"} Feb 16 16:20:16 crc kubenswrapper[4705]: E0216 16:20:16.430125 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:20:21 crc kubenswrapper[4705]: E0216 16:20:21.422496 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:20:23 crc kubenswrapper[4705]: I0216 16:20:23.420330 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:20:23 crc kubenswrapper[4705]: E0216 16:20:23.430401 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:20:27 crc kubenswrapper[4705]: E0216 16:20:27.422454 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:20:29 crc kubenswrapper[4705]: I0216 16:20:29.277022 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" event={"ID":"b3941987-2937-407a-a067-3f3af600f1f0","Type":"ContainerStarted","Data":"8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd"} Feb 16 16:20:29 crc kubenswrapper[4705]: I0216 16:20:29.277626 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" event={"ID":"b3941987-2937-407a-a067-3f3af600f1f0","Type":"ContainerStarted","Data":"f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97"} Feb 16 16:20:30 crc kubenswrapper[4705]: I0216 16:20:30.309535 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" podStartSLOduration=3.55589947 podStartE2EDuration="16.309507568s" podCreationTimestamp="2026-02-16 16:20:14 +0000 UTC" firstStartedPulling="2026-02-16 16:20:15.821280967 +0000 UTC m=+5210.006258043" lastFinishedPulling="2026-02-16 16:20:28.574889065 +0000 UTC m=+5222.759866141" observedRunningTime="2026-02-16 16:20:30.303797987 +0000 UTC m=+5224.488775083" watchObservedRunningTime="2026-02-16 16:20:30.309507568 +0000 UTC m=+5224.494484634" Feb 16 16:20:32 crc kubenswrapper[4705]: E0216 16:20:32.422577 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.824942 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bbqmf/crc-debug-v5rq9"] Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.828889 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.891293 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a1fadc-0734-4725-bd9d-61b8107bfb0a-host\") pod \"crc-debug-v5rq9\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.891419 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-558lx\" (UniqueName: \"kubernetes.io/projected/89a1fadc-0734-4725-bd9d-61b8107bfb0a-kube-api-access-558lx\") pod \"crc-debug-v5rq9\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.998817 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a1fadc-0734-4725-bd9d-61b8107bfb0a-host\") pod \"crc-debug-v5rq9\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.999318 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-558lx\" (UniqueName: \"kubernetes.io/projected/89a1fadc-0734-4725-bd9d-61b8107bfb0a-kube-api-access-558lx\") pod 
\"crc-debug-v5rq9\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" Feb 16 16:20:36 crc kubenswrapper[4705]: I0216 16:20:36.999801 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a1fadc-0734-4725-bd9d-61b8107bfb0a-host\") pod \"crc-debug-v5rq9\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" Feb 16 16:20:37 crc kubenswrapper[4705]: I0216 16:20:37.051241 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-558lx\" (UniqueName: \"kubernetes.io/projected/89a1fadc-0734-4725-bd9d-61b8107bfb0a-kube-api-access-558lx\") pod \"crc-debug-v5rq9\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" Feb 16 16:20:37 crc kubenswrapper[4705]: I0216 16:20:37.152802 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" Feb 16 16:20:37 crc kubenswrapper[4705]: I0216 16:20:37.361959 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" event={"ID":"89a1fadc-0734-4725-bd9d-61b8107bfb0a","Type":"ContainerStarted","Data":"bd2ba4eaf5239f5cbfbeb7f9af95435ccb1822a1e30795a3129148f059a5aa63"} Feb 16 16:20:37 crc kubenswrapper[4705]: I0216 16:20:37.420414 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:20:37 crc kubenswrapper[4705]: E0216 16:20:37.420895 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:20:40 crc kubenswrapper[4705]: E0216 16:20:40.424048 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:20:44 crc kubenswrapper[4705]: E0216 16:20:44.421609 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:20:49 crc kubenswrapper[4705]: I0216 16:20:49.420636 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:20:49 crc kubenswrapper[4705]: E0216 16:20:49.421532 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:20:51 crc kubenswrapper[4705]: I0216 16:20:51.544310 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" event={"ID":"89a1fadc-0734-4725-bd9d-61b8107bfb0a","Type":"ContainerStarted","Data":"8f9d60d3ff7f4d7d9fa574d150891b9282958c20ce0c2bd53d6b2206b8fed3e2"} Feb 16 16:20:51 crc kubenswrapper[4705]: I0216 16:20:51.579102 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" podStartSLOduration=2.050772156 podStartE2EDuration="15.579079725s" podCreationTimestamp="2026-02-16 16:20:36 +0000 UTC" firstStartedPulling="2026-02-16 16:20:37.217262534 +0000 UTC m=+5231.402239610" lastFinishedPulling="2026-02-16 16:20:50.745570103 +0000 UTC m=+5244.930547179" observedRunningTime="2026-02-16 16:20:51.564409501 +0000 UTC m=+5245.749386597" watchObservedRunningTime="2026-02-16 16:20:51.579079725 +0000 UTC m=+5245.764056801" Feb 16 16:20:55 crc kubenswrapper[4705]: E0216 16:20:55.422607 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:20:58 crc kubenswrapper[4705]: E0216 16:20:58.426819 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:21:04 crc kubenswrapper[4705]: I0216 16:21:04.419654 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:21:04 crc kubenswrapper[4705]: E0216 16:21:04.420496 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:21:08 crc 
kubenswrapper[4705]: I0216 16:21:08.781703 4705 generic.go:334] "Generic (PLEG): container finished" podID="89a1fadc-0734-4725-bd9d-61b8107bfb0a" containerID="8f9d60d3ff7f4d7d9fa574d150891b9282958c20ce0c2bd53d6b2206b8fed3e2" exitCode=0 Feb 16 16:21:08 crc kubenswrapper[4705]: I0216 16:21:08.781778 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" event={"ID":"89a1fadc-0734-4725-bd9d-61b8107bfb0a","Type":"ContainerDied","Data":"8f9d60d3ff7f4d7d9fa574d150891b9282958c20ce0c2bd53d6b2206b8fed3e2"} Feb 16 16:21:09 crc kubenswrapper[4705]: I0216 16:21:09.968340 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.015142 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bbqmf/crc-debug-v5rq9"] Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.025903 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bbqmf/crc-debug-v5rq9"] Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.095308 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a1fadc-0734-4725-bd9d-61b8107bfb0a-host\") pod \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.095404 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89a1fadc-0734-4725-bd9d-61b8107bfb0a-host" (OuterVolumeSpecName: "host") pod "89a1fadc-0734-4725-bd9d-61b8107bfb0a" (UID: "89a1fadc-0734-4725-bd9d-61b8107bfb0a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.095470 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-558lx\" (UniqueName: \"kubernetes.io/projected/89a1fadc-0734-4725-bd9d-61b8107bfb0a-kube-api-access-558lx\") pod \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\" (UID: \"89a1fadc-0734-4725-bd9d-61b8107bfb0a\") " Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.096975 4705 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89a1fadc-0734-4725-bd9d-61b8107bfb0a-host\") on node \"crc\" DevicePath \"\"" Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.104580 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a1fadc-0734-4725-bd9d-61b8107bfb0a-kube-api-access-558lx" (OuterVolumeSpecName: "kube-api-access-558lx") pod "89a1fadc-0734-4725-bd9d-61b8107bfb0a" (UID: "89a1fadc-0734-4725-bd9d-61b8107bfb0a"). InnerVolumeSpecName "kube-api-access-558lx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.201530 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-558lx\" (UniqueName: \"kubernetes.io/projected/89a1fadc-0734-4725-bd9d-61b8107bfb0a-kube-api-access-558lx\") on node \"crc\" DevicePath \"\"" Feb 16 16:21:10 crc kubenswrapper[4705]: E0216 16:21:10.422481 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.433284 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a1fadc-0734-4725-bd9d-61b8107bfb0a" path="/var/lib/kubelet/pods/89a1fadc-0734-4725-bd9d-61b8107bfb0a/volumes" Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.805128 4705 scope.go:117] "RemoveContainer" containerID="8f9d60d3ff7f4d7d9fa574d150891b9282958c20ce0c2bd53d6b2206b8fed3e2" Feb 16 16:21:10 crc kubenswrapper[4705]: I0216 16:21:10.805512 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-v5rq9" Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.282686 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bbqmf/crc-debug-qqxfr"] Feb 16 16:21:11 crc kubenswrapper[4705]: E0216 16:21:11.283679 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a1fadc-0734-4725-bd9d-61b8107bfb0a" containerName="container-00" Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.283697 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a1fadc-0734-4725-bd9d-61b8107bfb0a" containerName="container-00" Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.283993 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a1fadc-0734-4725-bd9d-61b8107bfb0a" containerName="container-00" Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.285071 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.438226 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmh95\" (UniqueName: \"kubernetes.io/projected/8e71011d-2714-45d9-883a-ca78a022c8f2-kube-api-access-gmh95\") pod \"crc-debug-qqxfr\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.438571 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e71011d-2714-45d9-883a-ca78a022c8f2-host\") pod \"crc-debug-qqxfr\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.541259 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmh95\" (UniqueName: 
\"kubernetes.io/projected/8e71011d-2714-45d9-883a-ca78a022c8f2-kube-api-access-gmh95\") pod \"crc-debug-qqxfr\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.541312 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e71011d-2714-45d9-883a-ca78a022c8f2-host\") pod \"crc-debug-qqxfr\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.542085 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e71011d-2714-45d9-883a-ca78a022c8f2-host\") pod \"crc-debug-qqxfr\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.566279 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmh95\" (UniqueName: \"kubernetes.io/projected/8e71011d-2714-45d9-883a-ca78a022c8f2-kube-api-access-gmh95\") pod \"crc-debug-qqxfr\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.611918 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" Feb 16 16:21:11 crc kubenswrapper[4705]: I0216 16:21:11.821460 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" event={"ID":"8e71011d-2714-45d9-883a-ca78a022c8f2","Type":"ContainerStarted","Data":"64b31e7776f7a0e7c43a83239e07e1c36d867bbe7ee9959a4a732ac0b14ed45a"} Feb 16 16:21:12 crc kubenswrapper[4705]: I0216 16:21:12.836179 4705 generic.go:334] "Generic (PLEG): container finished" podID="8e71011d-2714-45d9-883a-ca78a022c8f2" containerID="91a7329eb588d4dc77644c0a8fcd8b34a7c8d1a5b54ab9b07a6ef9b8cd0d72fc" exitCode=1 Feb 16 16:21:12 crc kubenswrapper[4705]: I0216 16:21:12.836272 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" event={"ID":"8e71011d-2714-45d9-883a-ca78a022c8f2","Type":"ContainerDied","Data":"91a7329eb588d4dc77644c0a8fcd8b34a7c8d1a5b54ab9b07a6ef9b8cd0d72fc"} Feb 16 16:21:12 crc kubenswrapper[4705]: I0216 16:21:12.875531 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bbqmf/crc-debug-qqxfr"] Feb 16 16:21:12 crc kubenswrapper[4705]: I0216 16:21:12.884643 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bbqmf/crc-debug-qqxfr"] Feb 16 16:21:13 crc kubenswrapper[4705]: E0216 16:21:13.422131 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:13.999842 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.110986 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e71011d-2714-45d9-883a-ca78a022c8f2-host\") pod \"8e71011d-2714-45d9-883a-ca78a022c8f2\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.111067 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmh95\" (UniqueName: \"kubernetes.io/projected/8e71011d-2714-45d9-883a-ca78a022c8f2-kube-api-access-gmh95\") pod \"8e71011d-2714-45d9-883a-ca78a022c8f2\" (UID: \"8e71011d-2714-45d9-883a-ca78a022c8f2\") " Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.111140 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e71011d-2714-45d9-883a-ca78a022c8f2-host" (OuterVolumeSpecName: "host") pod "8e71011d-2714-45d9-883a-ca78a022c8f2" (UID: "8e71011d-2714-45d9-883a-ca78a022c8f2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.111681 4705 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e71011d-2714-45d9-883a-ca78a022c8f2-host\") on node \"crc\" DevicePath \"\"" Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.118602 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e71011d-2714-45d9-883a-ca78a022c8f2-kube-api-access-gmh95" (OuterVolumeSpecName: "kube-api-access-gmh95") pod "8e71011d-2714-45d9-883a-ca78a022c8f2" (UID: "8e71011d-2714-45d9-883a-ca78a022c8f2"). InnerVolumeSpecName "kube-api-access-gmh95". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.215003 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmh95\" (UniqueName: \"kubernetes.io/projected/8e71011d-2714-45d9-883a-ca78a022c8f2-kube-api-access-gmh95\") on node \"crc\" DevicePath \"\"" Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.434215 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e71011d-2714-45d9-883a-ca78a022c8f2" path="/var/lib/kubelet/pods/8e71011d-2714-45d9-883a-ca78a022c8f2/volumes" Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.864210 4705 scope.go:117] "RemoveContainer" containerID="91a7329eb588d4dc77644c0a8fcd8b34a7c8d1a5b54ab9b07a6ef9b8cd0d72fc" Feb 16 16:21:14 crc kubenswrapper[4705]: I0216 16:21:14.864503 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/crc-debug-qqxfr" Feb 16 16:21:17 crc kubenswrapper[4705]: I0216 16:21:17.420727 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:21:17 crc kubenswrapper[4705]: E0216 16:21:17.421609 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:21:25 crc kubenswrapper[4705]: I0216 16:21:25.422413 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 16:21:25 crc kubenswrapper[4705]: E0216 16:21:25.535809 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:21:25 crc kubenswrapper[4705]: E0216 16:21:25.535890 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:21:25 crc kubenswrapper[4705]: E0216 16:21:25.537206 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:21:25 crc kubenswrapper[4705]: E0216 16:21:25.538485 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:21:26 crc kubenswrapper[4705]: E0216 16:21:26.556190 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:21:26 crc kubenswrapper[4705]: E0216 16:21:26.556591 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:21:26 crc kubenswrapper[4705]: E0216 16:21:26.556763 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:21:26 crc kubenswrapper[4705]: E0216 16:21:26.557982 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:21:29 crc kubenswrapper[4705]: I0216 16:21:29.420787 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:21:29 crc kubenswrapper[4705]: E0216 16:21:29.421473 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:21:38 crc kubenswrapper[4705]: E0216 16:21:38.421721 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:21:38 crc kubenswrapper[4705]: E0216 16:21:38.421842 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:21:42 crc kubenswrapper[4705]: I0216 16:21:42.424791 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:21:42 crc kubenswrapper[4705]: E0216 16:21:42.425901 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:21:50 crc kubenswrapper[4705]: E0216 16:21:50.429599 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:21:50 crc kubenswrapper[4705]: E0216 16:21:50.440979 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:21:55 crc kubenswrapper[4705]: I0216 16:21:55.420518 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:21:55 crc kubenswrapper[4705]: E0216 16:21:55.421411 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:22:02 crc kubenswrapper[4705]: E0216 16:22:02.421670 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:22:05 crc kubenswrapper[4705]: E0216 16:22:05.423738 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:22:08 crc kubenswrapper[4705]: I0216 16:22:08.420139 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:22:08 crc kubenswrapper[4705]: E0216 16:22:08.421361 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:22:15 crc kubenswrapper[4705]: E0216 16:22:15.422502 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:22:19 crc kubenswrapper[4705]: E0216 16:22:19.421552 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.013840 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8bb1d6b3-1208-4339-9d67-330c02618823/aodh-api/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.178336 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8bb1d6b3-1208-4339-9d67-330c02618823/aodh-evaluator/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.289306 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8bb1d6b3-1208-4339-9d67-330c02618823/aodh-listener/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.362707 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_8bb1d6b3-1208-4339-9d67-330c02618823/aodh-notifier/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.400861 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-675dd58676-vnqw2_ab2c420d-8288-48f7-b53e-f480bf6d5a7f/barbican-api/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.420811 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:22:21 crc kubenswrapper[4705]: E0216 16:22:21.421120 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.493861 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-675dd58676-vnqw2_ab2c420d-8288-48f7-b53e-f480bf6d5a7f/barbican-api-log/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.632666 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bf77f7566-frgcc_edea8308-f2c7-4f10-993c-974327a36727/barbican-keystone-listener/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.708471 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bf77f7566-frgcc_edea8308-f2c7-4f10-993c-974327a36727/barbican-keystone-listener-log/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.910052 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68c59b585f-gvjjl_eff171da-ce4a-4c88-b7bd-b7b88e6ad322/barbican-worker/0.log" Feb 16 16:22:21 crc kubenswrapper[4705]: I0216 16:22:21.931057 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68c59b585f-gvjjl_eff171da-ce4a-4c88-b7bd-b7b88e6ad322/barbican-worker-log/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.097639 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-w7c5t_ae6ba4a0-6ae7-42c6-9d27-cb62696d2c85/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.259262 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0eefb1ac-9933-45ff-a3de-de6a375bef45/ceilometer-notification-agent/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.364586 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0eefb1ac-9933-45ff-a3de-de6a375bef45/proxy-httpd/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.406800 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_0eefb1ac-9933-45ff-a3de-de6a375bef45/sg-core/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.571924 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d09b351a-8da4-4f00-8847-f3461478179f/cinder-api/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.633101 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d09b351a-8da4-4f00-8847-f3461478179f/cinder-api-log/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.865649 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c85708f6-f2cb-4248-94e9-7c7763e88275/probe/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.910128 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-l9dk8_414f383c-09a6-4895-81cc-e12f73391831/init/0.log" Feb 16 16:22:22 crc kubenswrapper[4705]: I0216 16:22:22.971974 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c85708f6-f2cb-4248-94e9-7c7763e88275/cinder-scheduler/0.log" Feb 16 16:22:23 crc kubenswrapper[4705]: I0216 16:22:23.110010 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-l9dk8_414f383c-09a6-4895-81cc-e12f73391831/dnsmasq-dns/0.log" Feb 16 16:22:23 crc kubenswrapper[4705]: I0216 16:22:23.123714 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-l9dk8_414f383c-09a6-4895-81cc-e12f73391831/init/0.log" Feb 16 16:22:23 crc kubenswrapper[4705]: I0216 16:22:23.214602 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-49hkn_49d4643c-71ab-4c0f-b3cb-0f494971aa6e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:23 crc kubenswrapper[4705]: I0216 16:22:23.593737 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bpqkx_5c695fba-8bed-4549-98f9-b708893eab8e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:23 crc kubenswrapper[4705]: I0216 16:22:23.686021 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-drn5g_447b9ab7-d583-4e71-8eca-fb352e541b13/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:23 crc kubenswrapper[4705]: I0216 16:22:23.839209 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-f9d6j_0b4f3354-7fb7-4031-9c17-270d82f9ece1/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.024421 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mtzln_ca989d06-e6a2-47cc-abc9-17d4c2740830/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.091841 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-pt7g6_896e8ac5-e84c-41d6-a6e5-638c9b5cae1c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.255695 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-tjfwq_df22a5a3-55ac-4d51-99bb-c6624cd8ba8f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.378501 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2ef0b445-ec9e-4c58-a7d3-59068664d3ca/glance-httpd/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.537876 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_2ef0b445-ec9e-4c58-a7d3-59068664d3ca/glance-log/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.611780 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_28ba576c-ee01-48ea-b78b-a2bea81b90a2/glance-log/0.log" Feb 16 16:22:24 crc kubenswrapper[4705]: I0216 16:22:24.671714 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_28ba576c-ee01-48ea-b78b-a2bea81b90a2/glance-httpd/0.log" Feb 16 16:22:25 crc kubenswrapper[4705]: I0216 16:22:25.388162 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7b7bf99b56-hm6dc_ada71f46-f923-4974-9776-ed92f20c79b1/heat-engine/0.log" Feb 16 16:22:25 crc kubenswrapper[4705]: I0216 16:22:25.460854 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7986669c9b-q8ghv_08b1576e-92c8-407b-b821-e0cbfe1be11a/heat-api/0.log" Feb 16 16:22:25 crc kubenswrapper[4705]: I0216 16:22:25.532155 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-65b6d6849b-79456_94fb430a-807d-4e37-bc5a-9b4c75454427/heat-cfnapi/0.log" Feb 16 16:22:25 crc kubenswrapper[4705]: I0216 16:22:25.627524 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6cd49d8b6b-6gdmx_57b8117e-e668-46a4-a652-8ac2b3e5d8ff/keystone-api/0.log" Feb 16 16:22:25 crc kubenswrapper[4705]: I0216 16:22:25.755307 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29520961-75mxg_98bca645-7f96-4667-adb9-cf4c5002ba78/keystone-cron/0.log" Feb 16 16:22:25 crc kubenswrapper[4705]: I0216 16:22:25.808147 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_db5e423c-e590-4e7b-913a-a0a10d55537d/kube-state-metrics/0.log" Feb 16 16:22:26 crc kubenswrapper[4705]: I0216 16:22:26.122787 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_mysqld-exporter-0_d40e4f3a-57bb-45e6-997b-39ffc0e497d9/mysqld-exporter/0.log" Feb 16 16:22:26 crc kubenswrapper[4705]: I0216 16:22:26.527505 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-66f94f69bf-82g78_f7edca3b-82f6-4cfb-9781-664afa855ba8/neutron-api/0.log" Feb 16 16:22:26 crc kubenswrapper[4705]: I0216 16:22:26.653959 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-66f94f69bf-82g78_f7edca3b-82f6-4cfb-9781-664afa855ba8/neutron-httpd/0.log" Feb 16 16:22:26 crc kubenswrapper[4705]: I0216 16:22:26.956317 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b3f98b0f-bb45-4942-81e0-68e6f2658df5/nova-api-log/0.log" Feb 16 16:22:27 crc kubenswrapper[4705]: I0216 16:22:27.136929 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_4d5bb097-aa56-4b02-942e-70b894afe84a/nova-cell0-conductor-conductor/0.log" Feb 16 16:22:27 crc kubenswrapper[4705]: I0216 16:22:27.314563 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b3f98b0f-bb45-4942-81e0-68e6f2658df5/nova-api-api/0.log" Feb 16 16:22:27 crc kubenswrapper[4705]: I0216 16:22:27.372620 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_53aeb0ad-0bd6-4b7e-8c67-dd0f8788c55d/nova-cell1-conductor-conductor/0.log" Feb 16 16:22:27 crc kubenswrapper[4705]: I0216 16:22:27.538015 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b49f6329-2396-4d3e-9b28-2dd3586b1965/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 16:22:27 crc kubenswrapper[4705]: I0216 16:22:27.682626 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e121221e-aecf-4425-bb78-e384ce98e73b/nova-metadata-log/0.log" Feb 16 16:22:27 crc kubenswrapper[4705]: I0216 16:22:27.991831 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_e67e0dd7-af17-4240-ab5a-b6c149913841/nova-scheduler-scheduler/0.log" Feb 16 16:22:28 crc kubenswrapper[4705]: I0216 16:22:28.173339 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab/mysql-bootstrap/0.log" Feb 16 16:22:28 crc kubenswrapper[4705]: E0216 16:22:28.428154 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:22:28 crc kubenswrapper[4705]: I0216 16:22:28.479752 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab/galera/0.log" Feb 16 16:22:28 crc kubenswrapper[4705]: I0216 16:22:28.513190 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_616bbda0-7abf-4cfb-b7f8-f8cca8fb5eab/mysql-bootstrap/0.log" Feb 16 16:22:28 crc kubenswrapper[4705]: I0216 16:22:28.716765 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_50502923-5ef9-46a9-a23d-abe8face6040/mysql-bootstrap/0.log" Feb 16 16:22:28 crc kubenswrapper[4705]: I0216 16:22:28.989769 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_50502923-5ef9-46a9-a23d-abe8face6040/mysql-bootstrap/0.log" Feb 16 16:22:29 crc kubenswrapper[4705]: I0216 16:22:29.003569 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_50502923-5ef9-46a9-a23d-abe8face6040/galera/0.log" Feb 16 16:22:29 crc kubenswrapper[4705]: I0216 16:22:29.208237 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_4881941b-eb71-45be-aa51-0e8431b29e89/openstackclient/0.log" Feb 16 16:22:29 crc kubenswrapper[4705]: I0216 16:22:29.306998 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-crbv8_4374b7db-8c42-42e1-b2bd-c633bdd8edfd/ovn-controller/0.log" Feb 16 16:22:29 crc kubenswrapper[4705]: I0216 16:22:29.538020 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jbdgd_17fdf2f8-bce0-4b08-a8f5-06cfbcd4e772/openstack-network-exporter/0.log" Feb 16 16:22:29 crc kubenswrapper[4705]: I0216 16:22:29.717753 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e121221e-aecf-4425-bb78-e384ce98e73b/nova-metadata-metadata/0.log" Feb 16 16:22:29 crc kubenswrapper[4705]: I0216 16:22:29.739713 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pc9sf_be538ffa-cfea-445d-872f-1a0a68b77a50/ovsdb-server-init/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.074118 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pc9sf_be538ffa-cfea-445d-872f-1a0a68b77a50/ovsdb-server-init/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.149887 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pc9sf_be538ffa-cfea-445d-872f-1a0a68b77a50/ovs-vswitchd/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.203209 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pc9sf_be538ffa-cfea-445d-872f-1a0a68b77a50/ovsdb-server/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.337512 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1ca8a807-8e20-4d12-8355-09c1883163ca/openstack-network-exporter/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.386312 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_1ca8a807-8e20-4d12-8355-09c1883163ca/ovn-northd/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.579076 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1e54f9b0-7b03-46de-8c76-2a37e44a02df/ovsdbserver-nb/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.589964 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1e54f9b0-7b03-46de-8c76-2a37e44a02df/openstack-network-exporter/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.777641 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_54e71500-a592-4c97-86c1-4f3f6a4d1b41/openstack-network-exporter/0.log" Feb 16 16:22:30 crc kubenswrapper[4705]: I0216 16:22:30.888149 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_54e71500-a592-4c97-86c1-4f3f6a4d1b41/ovsdbserver-sb/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.047093 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6599894f76-dcwz8_4122899e-95db-413a-ac71-f0574969753a/placement-api/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.076070 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6599894f76-dcwz8_4122899e-95db-413a-ac71-f0574969753a/placement-log/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.171921 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0ed43376-64ee-4fa7-9e24-00d85997e8c1/init-config-reloader/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.386860 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0ed43376-64ee-4fa7-9e24-00d85997e8c1/prometheus/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.436651 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_0ed43376-64ee-4fa7-9e24-00d85997e8c1/config-reloader/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.507755 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0ed43376-64ee-4fa7-9e24-00d85997e8c1/thanos-sidecar/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.539074 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0ed43376-64ee-4fa7-9e24-00d85997e8c1/init-config-reloader/0.log" Feb 16 16:22:31 crc kubenswrapper[4705]: I0216 16:22:31.692884 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_35504e73-1115-4e30-8ef7-95e85f31eaf6/setup-container/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.061804 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_af0e4de4-5af4-4d5c-b2c4-963771612f94/setup-container/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.128124 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_35504e73-1115-4e30-8ef7-95e85f31eaf6/rabbitmq/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.138598 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_35504e73-1115-4e30-8ef7-95e85f31eaf6/setup-container/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.303678 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_af0e4de4-5af4-4d5c-b2c4-963771612f94/setup-container/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.462376 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_af0e4de4-5af4-4d5c-b2c4-963771612f94/rabbitmq/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.517592 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-1_3e86fa10-e583-4f86-97f5-e95ec2c9e9e0/setup-container/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.721845 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_3e86fa10-e583-4f86-97f5-e95ec2c9e9e0/setup-container/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.795884 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_3e86fa10-e583-4f86-97f5-e95ec2c9e9e0/rabbitmq/0.log" Feb 16 16:22:32 crc kubenswrapper[4705]: I0216 16:22:32.853547 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_f3671c78-83d9-45b6-a869-d08abfa12906/setup-container/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.064040 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_f3671c78-83d9-45b6-a869-d08abfa12906/setup-container/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.076337 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_f3671c78-83d9-45b6-a869-d08abfa12906/rabbitmq/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.136765 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7zg59_c73749fc-8501-405f-bd7e-de9fca2d968a/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.380876 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-gs5p7_9476bb8c-80dc-4227-bc28-fd6b5fe8f8f0/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: E0216 16:22:33.421240 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.587404 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85b76884b7-g4c57_811fab8b-dbb5-4985-b67f-d3671ea6ff9b/proxy-server/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.631391 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85b76884b7-g4c57_811fab8b-dbb5-4985-b67f-d3671ea6ff9b/proxy-httpd/0.log" Feb 16 16:22:33 crc kubenswrapper[4705]: I0216 16:22:33.707050 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-bkfjd_f5297b85-4dcb-4e4d-8b11-fbba54b2b31d/swift-ring-rebalance/0.log" Feb 16 16:22:34 crc kubenswrapper[4705]: I0216 16:22:34.701647 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/account-auditor/0.log" Feb 16 16:22:34 crc kubenswrapper[4705]: I0216 16:22:34.810451 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/account-server/0.log" Feb 16 16:22:34 crc kubenswrapper[4705]: I0216 16:22:34.839950 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/account-replicator/0.log" Feb 16 16:22:34 crc kubenswrapper[4705]: I0216 16:22:34.848187 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/account-reaper/0.log" Feb 16 16:22:34 crc kubenswrapper[4705]: I0216 16:22:34.980517 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/container-auditor/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.071538 4705 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/container-updater/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.087215 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/container-replicator/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.113287 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/container-server/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.287624 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/object-auditor/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.398162 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/object-expirer/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.415682 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/object-replicator/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.420248 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:22:35 crc kubenswrapper[4705]: E0216 16:22:35.420608 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.454491 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/object-server/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.557005 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/object-updater/0.log" Feb 16 16:22:35 crc kubenswrapper[4705]: I0216 16:22:35.668278 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/rsync/0.log" Feb 16 16:22:36 crc kubenswrapper[4705]: I0216 16:22:36.050949 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a1c8c609-3b8c-48d1-9731-56451bf10919/swift-recon-cron/0.log" Feb 16 16:22:40 crc kubenswrapper[4705]: I0216 16:22:40.530226 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_db14762a-eebd-41a0-b107-e879fedc05f1/memcached/0.log" Feb 16 16:22:42 crc kubenswrapper[4705]: E0216 16:22:42.422363 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:22:46 crc kubenswrapper[4705]: I0216 16:22:46.431559 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:22:46 crc kubenswrapper[4705]: E0216 16:22:46.432271 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:22:47 crc kubenswrapper[4705]: E0216 16:22:47.421658 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.138169 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rsd44"] Feb 16 16:22:52 crc kubenswrapper[4705]: E0216 16:22:52.139941 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e71011d-2714-45d9-883a-ca78a022c8f2" containerName="container-00" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.139959 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e71011d-2714-45d9-883a-ca78a022c8f2" containerName="container-00" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.140235 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e71011d-2714-45d9-883a-ca78a022c8f2" containerName="container-00" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.142195 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.170786 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rsd44"] Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.194036 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dfl2\" (UniqueName: \"kubernetes.io/projected/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-kube-api-access-4dfl2\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.194499 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-utilities\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.194815 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-catalog-content\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.297367 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-catalog-content\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.297568 4705 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4dfl2\" (UniqueName: \"kubernetes.io/projected/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-kube-api-access-4dfl2\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.297605 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-utilities\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.297978 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-catalog-content\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.298076 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-utilities\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.321877 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dfl2\" (UniqueName: \"kubernetes.io/projected/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-kube-api-access-4dfl2\") pod \"redhat-operators-rsd44\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:52 crc kubenswrapper[4705]: I0216 16:22:52.469261 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:22:53 crc kubenswrapper[4705]: I0216 16:22:53.132921 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rsd44"] Feb 16 16:22:54 crc kubenswrapper[4705]: I0216 16:22:54.049206 4705 generic.go:334] "Generic (PLEG): container finished" podID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerID="06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475" exitCode=0 Feb 16 16:22:54 crc kubenswrapper[4705]: I0216 16:22:54.049660 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerDied","Data":"06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475"} Feb 16 16:22:54 crc kubenswrapper[4705]: I0216 16:22:54.049688 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerStarted","Data":"5230e4898fc0f99178a19a89acba9e11354fc6b0463fa93c560ea2c9d29a6bde"} Feb 16 16:22:55 crc kubenswrapper[4705]: I0216 16:22:55.066768 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerStarted","Data":"46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f"} Feb 16 16:22:55 crc kubenswrapper[4705]: E0216 16:22:55.423471 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:23:00 crc kubenswrapper[4705]: I0216 16:23:00.127943 4705 generic.go:334] "Generic (PLEG): container finished" 
podID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerID="46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f" exitCode=0 Feb 16 16:23:00 crc kubenswrapper[4705]: I0216 16:23:00.128033 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerDied","Data":"46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f"} Feb 16 16:23:00 crc kubenswrapper[4705]: I0216 16:23:00.419413 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:23:00 crc kubenswrapper[4705]: E0216 16:23:00.420008 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:23:01 crc kubenswrapper[4705]: I0216 16:23:01.140478 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerStarted","Data":"1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81"} Feb 16 16:23:01 crc kubenswrapper[4705]: I0216 16:23:01.185576 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rsd44" podStartSLOduration=2.715055014 podStartE2EDuration="9.185550138s" podCreationTimestamp="2026-02-16 16:22:52 +0000 UTC" firstStartedPulling="2026-02-16 16:22:54.05189074 +0000 UTC m=+5368.236867816" lastFinishedPulling="2026-02-16 16:23:00.522385864 +0000 UTC m=+5374.707362940" observedRunningTime="2026-02-16 16:23:01.163391302 +0000 UTC m=+5375.348368398" 
watchObservedRunningTime="2026-02-16 16:23:01.185550138 +0000 UTC m=+5375.370527234" Feb 16 16:23:02 crc kubenswrapper[4705]: E0216 16:23:02.421977 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:23:02 crc kubenswrapper[4705]: I0216 16:23:02.469667 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:23:02 crc kubenswrapper[4705]: I0216 16:23:02.469751 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:23:03 crc kubenswrapper[4705]: I0216 16:23:03.979968 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rsd44" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" probeResult="failure" output=< Feb 16 16:23:03 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:23:03 crc kubenswrapper[4705]: > Feb 16 16:23:07 crc kubenswrapper[4705]: I0216 16:23:07.989965 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/util/0.log" Feb 16 16:23:08 crc kubenswrapper[4705]: I0216 16:23:08.283901 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/util/0.log" Feb 16 16:23:08 crc kubenswrapper[4705]: I0216 16:23:08.286091 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/pull/0.log" Feb 16 16:23:08 crc kubenswrapper[4705]: I0216 16:23:08.317983 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/pull/0.log" Feb 16 16:23:08 crc kubenswrapper[4705]: I0216 16:23:08.502377 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/extract/0.log" Feb 16 16:23:08 crc kubenswrapper[4705]: I0216 16:23:08.512980 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/util/0.log" Feb 16 16:23:08 crc kubenswrapper[4705]: I0216 16:23:08.521669 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a107860892mcw_1e942955-af48-4230-98dd-d8228e586600/pull/0.log" Feb 16 16:23:09 crc kubenswrapper[4705]: I0216 16:23:09.031202 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-fsx2w_f0b4e27c-91ff-4540-bfff-e6c30849c75f/manager/0.log" Feb 16 16:23:09 crc kubenswrapper[4705]: I0216 16:23:09.419029 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-xdlbv_59e2a9a8-5a0d-4772-8d9c-b755fcd234be/manager/0.log" Feb 16 16:23:09 crc kubenswrapper[4705]: E0216 16:23:09.421479 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:23:09 crc kubenswrapper[4705]: I0216 16:23:09.778590 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-f4fgx_5ee1a78f-cea6-443b-9b43-9ed2334c5c9e/manager/0.log" Feb 16 16:23:09 crc kubenswrapper[4705]: I0216 16:23:09.884522 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-q5n45_f1a4206b-818d-49e7-9177-9dc7373ded1c/manager/0.log" Feb 16 16:23:10 crc kubenswrapper[4705]: I0216 16:23:10.459882 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-ftdcn_a6d65371-bf15-42b9-857d-c4c7350aa402/manager/0.log" Feb 16 16:23:10 crc kubenswrapper[4705]: I0216 16:23:10.782846 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-xg4dw_9bd1689a-ae93-4ac0-ab21-c899756ef07a/manager/0.log" Feb 16 16:23:11 crc kubenswrapper[4705]: I0216 16:23:11.135255 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-8lztr_34eadd57-e91b-4324-93c0-ede339012ab3/manager/0.log" Feb 16 16:23:11 crc kubenswrapper[4705]: I0216 16:23:11.318962 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-dnbpd_f06e9156-0c7b-41f6-a1cf-83820a7e7732/manager/0.log" Feb 16 16:23:11 crc kubenswrapper[4705]: I0216 16:23:11.419458 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:23:11 crc kubenswrapper[4705]: E0216 16:23:11.419813 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:23:11 crc kubenswrapper[4705]: I0216 16:23:11.570361 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-kh759_e73efbc6-26db-4760-b745-3c93c9b2329e/manager/0.log" Feb 16 16:23:11 crc kubenswrapper[4705]: I0216 16:23:11.883194 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-2vvm8_9f0ad3cb-ac80-4462-bd97-b09f9367dc54/manager/0.log" Feb 16 16:23:11 crc kubenswrapper[4705]: I0216 16:23:11.909172 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-s9vdm_84edc365-fa2c-40bc-ae0e-b71ae094ab27/manager/0.log" Feb 16 16:23:12 crc kubenswrapper[4705]: I0216 16:23:12.240206 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-b6587_8279d837-6ad4-4e2b-a03a-eb0a24a30998/manager/0.log" Feb 16 16:23:12 crc kubenswrapper[4705]: I0216 16:23:12.384554 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9ck9xgq_1872b592-a1cc-445a-b75f-f658612dc160/manager/0.log" Feb 16 16:23:13 crc kubenswrapper[4705]: I0216 16:23:13.037779 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-787c798d66-r8xk2_a8b2ba76-e9d9-404f-9859-22c40c63f1fb/operator/0.log" Feb 16 16:23:13 crc kubenswrapper[4705]: I0216 16:23:13.115748 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-rtf6z_050e9b74-0e40-4a1a-8cb8-1ee038752bb6/registry-server/0.log" Feb 16 16:23:13 crc kubenswrapper[4705]: I0216 16:23:13.461145 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-hw64s_d4a1c432-7691-472b-80af-caaa6afcacb2/manager/0.log" Feb 16 16:23:13 crc kubenswrapper[4705]: I0216 16:23:13.532476 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rsd44" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" probeResult="failure" output=< Feb 16 16:23:13 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:23:13 crc kubenswrapper[4705]: > Feb 16 16:23:13 crc kubenswrapper[4705]: I0216 16:23:13.755997 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-vkmgq_794d8603-8fa6-4068-8a38-e0825d42ae3f/manager/0.log" Feb 16 16:23:14 crc kubenswrapper[4705]: I0216 16:23:14.031576 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5s9ck_d67e5221-5cd4-4659-a41b-5d470f435c3e/operator/0.log" Feb 16 16:23:14 crc kubenswrapper[4705]: I0216 16:23:14.276546 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-6c6fr_ca67e7ec-20a9-4768-ae37-3aa90f721201/manager/0.log" Feb 16 16:23:14 crc kubenswrapper[4705]: I0216 16:23:14.815891 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-bk9rm_c66cb2ee-a6d3-454b-a2ea-a160038b76f6/manager/0.log" Feb 16 16:23:15 crc kubenswrapper[4705]: I0216 16:23:15.221917 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6ccb9b958b-qbt7j_8d4c4ad7-542f-4d25-a444-7b4752e43f89/manager/0.log" Feb 16 16:23:15 crc kubenswrapper[4705]: I0216 16:23:15.311829 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5b45b684f5-zrvmj_07891331-9fdb-4922-aea1-6a3acf7f656f/manager/0.log" Feb 16 16:23:15 crc kubenswrapper[4705]: E0216 16:23:15.421944 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:23:15 crc kubenswrapper[4705]: I0216 16:23:15.674187 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-77d2l_d583ac10-9ad2-4f95-9787-74f2cb28c943/manager/0.log" Feb 16 16:23:16 crc kubenswrapper[4705]: I0216 16:23:16.068543 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-zk57l_7373be90-eefb-4c2b-bdbd-a312daef2434/manager/0.log" Feb 16 16:23:23 crc kubenswrapper[4705]: I0216 16:23:23.420469 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-f52r7_1b9942d1-9e1e-436b-8a58-e37d6b55a00b/manager/0.log" Feb 16 16:23:23 crc kubenswrapper[4705]: I0216 16:23:23.524151 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rsd44" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" probeResult="failure" output=< Feb 16 16:23:23 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:23:23 crc kubenswrapper[4705]: > Feb 16 16:23:24 
crc kubenswrapper[4705]: E0216 16:23:24.422658 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:23:25 crc kubenswrapper[4705]: I0216 16:23:25.419898 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:23:25 crc kubenswrapper[4705]: E0216 16:23:25.420974 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:23:29 crc kubenswrapper[4705]: E0216 16:23:29.423230 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:23:32 crc kubenswrapper[4705]: I0216 16:23:32.521579 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:23:32 crc kubenswrapper[4705]: I0216 16:23:32.575138 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:23:32 crc kubenswrapper[4705]: I0216 16:23:32.769952 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-rsd44"] Feb 16 16:23:34 crc kubenswrapper[4705]: I0216 16:23:34.509638 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rsd44" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" containerID="cri-o://1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81" gracePeriod=2 Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.159139 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.240295 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-utilities\") pod \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.240755 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dfl2\" (UniqueName: \"kubernetes.io/projected/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-kube-api-access-4dfl2\") pod \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.241040 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-catalog-content\") pod \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\" (UID: \"fdc07fe9-1299-4e6c-8178-a7c42b022c7c\") " Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.251898 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-utilities" (OuterVolumeSpecName: "utilities") pod "fdc07fe9-1299-4e6c-8178-a7c42b022c7c" (UID: 
"fdc07fe9-1299-4e6c-8178-a7c42b022c7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.253850 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-kube-api-access-4dfl2" (OuterVolumeSpecName: "kube-api-access-4dfl2") pod "fdc07fe9-1299-4e6c-8178-a7c42b022c7c" (UID: "fdc07fe9-1299-4e6c-8178-a7c42b022c7c"). InnerVolumeSpecName "kube-api-access-4dfl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.344347 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.344410 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dfl2\" (UniqueName: \"kubernetes.io/projected/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-kube-api-access-4dfl2\") on node \"crc\" DevicePath \"\"" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.361530 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdc07fe9-1299-4e6c-8178-a7c42b022c7c" (UID: "fdc07fe9-1299-4e6c-8178-a7c42b022c7c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.446361 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc07fe9-1299-4e6c-8178-a7c42b022c7c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.522287 4705 generic.go:334] "Generic (PLEG): container finished" podID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerID="1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81" exitCode=0 Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.522336 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerDied","Data":"1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81"} Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.522391 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsd44" event={"ID":"fdc07fe9-1299-4e6c-8178-a7c42b022c7c","Type":"ContainerDied","Data":"5230e4898fc0f99178a19a89acba9e11354fc6b0463fa93c560ea2c9d29a6bde"} Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.522397 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rsd44" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.522411 4705 scope.go:117] "RemoveContainer" containerID="1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.557345 4705 scope.go:117] "RemoveContainer" containerID="46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.575106 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rsd44"] Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.588088 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rsd44"] Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.601400 4705 scope.go:117] "RemoveContainer" containerID="06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.641537 4705 scope.go:117] "RemoveContainer" containerID="1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81" Feb 16 16:23:35 crc kubenswrapper[4705]: E0216 16:23:35.641994 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81\": container with ID starting with 1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81 not found: ID does not exist" containerID="1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.642025 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81"} err="failed to get container status \"1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81\": rpc error: code = NotFound desc = could not find container 
\"1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81\": container with ID starting with 1c86e0bfc0b4aa99ed9cfb73ff5e0f68fe0ce949da94b04bd8bfd53fcc359d81 not found: ID does not exist" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.642047 4705 scope.go:117] "RemoveContainer" containerID="46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f" Feb 16 16:23:35 crc kubenswrapper[4705]: E0216 16:23:35.642388 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f\": container with ID starting with 46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f not found: ID does not exist" containerID="46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.642473 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f"} err="failed to get container status \"46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f\": rpc error: code = NotFound desc = could not find container \"46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f\": container with ID starting with 46ade467fbc52c594a7ed851d122e896aa4514ad6bde2efd886920a17e986d0f not found: ID does not exist" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.642549 4705 scope.go:117] "RemoveContainer" containerID="06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475" Feb 16 16:23:35 crc kubenswrapper[4705]: E0216 16:23:35.643174 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475\": container with ID starting with 06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475 not found: ID does not exist" 
containerID="06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475" Feb 16 16:23:35 crc kubenswrapper[4705]: I0216 16:23:35.643196 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475"} err="failed to get container status \"06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475\": rpc error: code = NotFound desc = could not find container \"06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475\": container with ID starting with 06e9e75b66d1131ffaa172c415c9073249554193876c8bcc3105abdc0575c475 not found: ID does not exist" Feb 16 16:23:36 crc kubenswrapper[4705]: I0216 16:23:36.433409 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" path="/var/lib/kubelet/pods/fdc07fe9-1299-4e6c-8178-a7c42b022c7c/volumes" Feb 16 16:23:37 crc kubenswrapper[4705]: I0216 16:23:37.419546 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:23:37 crc kubenswrapper[4705]: E0216 16:23:37.420211 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:23:37 crc kubenswrapper[4705]: E0216 16:23:37.421158 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 
16:23:39 crc kubenswrapper[4705]: I0216 16:23:39.717260 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kqpk2_0b436476-c64b-40ca-a644-1067ccefcecc/control-plane-machine-set-operator/0.log" Feb 16 16:23:39 crc kubenswrapper[4705]: I0216 16:23:39.872940 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tzm67_b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea/machine-api-operator/0.log" Feb 16 16:23:39 crc kubenswrapper[4705]: I0216 16:23:39.888663 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tzm67_b36bf8ce-b0c7-4aa3-9027-bcefb87e88ea/kube-rbac-proxy/0.log" Feb 16 16:23:43 crc kubenswrapper[4705]: E0216 16:23:43.422186 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:23:49 crc kubenswrapper[4705]: I0216 16:23:49.419529 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:23:49 crc kubenswrapper[4705]: E0216 16:23:49.420317 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:23:51 crc kubenswrapper[4705]: E0216 16:23:51.423098 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:23:56 crc kubenswrapper[4705]: E0216 16:23:56.432191 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:23:56 crc kubenswrapper[4705]: I0216 16:23:56.959729 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-txcpz_ca614a32-6a4c-4802-8cb5-a927aac7a59a/cert-manager-cainjector/0.log" Feb 16 16:23:57 crc kubenswrapper[4705]: I0216 16:23:57.045101 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-46spv_b6695119-142b-40cb-bdd8-e0e1f55e0e61/cert-manager-controller/0.log" Feb 16 16:23:57 crc kubenswrapper[4705]: I0216 16:23:57.185862 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-mdqgz_fc1f84cc-974e-42c8-8b49-120dfe74aa0f/cert-manager-webhook/0.log" Feb 16 16:24:01 crc kubenswrapper[4705]: I0216 16:24:01.429306 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:24:01 crc kubenswrapper[4705]: E0216 16:24:01.430312 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:24:06 crc kubenswrapper[4705]: E0216 16:24:06.429922 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:24:07 crc kubenswrapper[4705]: E0216 16:24:07.422312 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:24:12 crc kubenswrapper[4705]: I0216 16:24:12.564034 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-hl5c9_303c8298-3e10-49e8-96b1-ed1dafcd23e3/nmstate-console-plugin/0.log" Feb 16 16:24:12 crc kubenswrapper[4705]: I0216 16:24:12.839132 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wr89v_9ffb9d03-b8ea-44ff-9397-58b55c367d89/nmstate-handler/0.log" Feb 16 16:24:12 crc kubenswrapper[4705]: I0216 16:24:12.975021 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-tnbq4_ed67458f-1875-405e-85a5-2a4f7d54089b/kube-rbac-proxy/0.log" Feb 16 16:24:13 crc kubenswrapper[4705]: I0216 16:24:13.053626 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-tnbq4_ed67458f-1875-405e-85a5-2a4f7d54089b/nmstate-metrics/0.log" Feb 16 16:24:13 crc kubenswrapper[4705]: I0216 16:24:13.142735 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-h6nzt_b2d83f82-a3e4-4937-8484-5f8174b5d986/nmstate-operator/0.log" Feb 16 16:24:13 crc kubenswrapper[4705]: I0216 16:24:13.254412 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-9kf74_7a87077c-c5fa-4c92-9c08-44dcf11d38c7/nmstate-webhook/0.log" Feb 16 16:24:15 crc kubenswrapper[4705]: I0216 16:24:15.419903 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:24:15 crc kubenswrapper[4705]: E0216 16:24:15.420912 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:24:17 crc kubenswrapper[4705]: E0216 16:24:17.422765 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:24:22 crc kubenswrapper[4705]: E0216 16:24:22.423133 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:24:26 crc kubenswrapper[4705]: I0216 16:24:26.428361 4705 scope.go:117] "RemoveContainer" 
containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:24:26 crc kubenswrapper[4705]: E0216 16:24:26.429505 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:24:28 crc kubenswrapper[4705]: E0216 16:24:28.423965 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:24:28 crc kubenswrapper[4705]: I0216 16:24:28.818087 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6b7769c4bd-hnqwn_e0f8cfad-0639-40d4-8a2c-832935b8cddc/kube-rbac-proxy/0.log" Feb 16 16:24:28 crc kubenswrapper[4705]: I0216 16:24:28.863642 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6b7769c4bd-hnqwn_e0f8cfad-0639-40d4-8a2c-832935b8cddc/manager/0.log" Feb 16 16:24:33 crc kubenswrapper[4705]: E0216 16:24:33.424632 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:24:40 crc kubenswrapper[4705]: I0216 16:24:40.419811 4705 scope.go:117] "RemoveContainer" 
containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:24:41 crc kubenswrapper[4705]: I0216 16:24:41.271282 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"33a5df3c257d84a8166fd06a7d0411c00b5ba907e5ffa85255e7d74010f46140"} Feb 16 16:24:42 crc kubenswrapper[4705]: E0216 16:24:42.422984 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:24:43 crc kubenswrapper[4705]: I0216 16:24:43.404725 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-f8kwg_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb/prometheus-operator/0.log" Feb 16 16:24:43 crc kubenswrapper[4705]: I0216 16:24:43.594773 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_81328a1c-32d6-4ce6-9139-8418d2e8fa52/prometheus-operator-admission-webhook/0.log" Feb 16 16:24:43 crc kubenswrapper[4705]: I0216 16:24:43.622008 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_b90dedac-68bb-409d-9860-af59c6c7d172/prometheus-operator-admission-webhook/0.log" Feb 16 16:24:43 crc kubenswrapper[4705]: I0216 16:24:43.813039 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-l2rxp_5510c272-cd32-4850-a9fa-daff2e045b92/operator/0.log" Feb 16 16:24:43 crc kubenswrapper[4705]: I0216 16:24:43.885417 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-9hcns_72697fcc-cd94-4ba9-9479-cb5bd82d83ab/observability-ui-dashboards/0.log" Feb 16 16:24:44 crc kubenswrapper[4705]: I0216 16:24:44.029326 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tqj56_8acc36de-d26d-44cd-bad6-d31f0a4a4520/perses-operator/0.log" Feb 16 16:24:44 crc kubenswrapper[4705]: E0216 16:24:44.423029 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:24:54 crc kubenswrapper[4705]: E0216 16:24:54.425757 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:24:59 crc kubenswrapper[4705]: E0216 16:24:59.422599 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:24:59 crc kubenswrapper[4705]: I0216 16:24:59.942500 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-9x6cn_0c3bde1b-6330-4a53-b0f7-fde6bf7c89f9/cluster-logging-operator/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.112437 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_collector-rv6rf_48e8fbe7-00e3-47dc-bf0b-3c186b6bc6a9/collector/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.189069 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_cd14a989-22ac-46cb-9295-a99e2043542b/loki-compactor/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.530615 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-s8kg2_feb0e04c-e741-4dbe-8c09-94379b736809/loki-distributor/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.587092 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-84f4bcb569-mzgch_a85ad7e0-59d0-412d-96e1-298020ef9927/opa/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.606206 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-84f4bcb569-mzgch_a85ad7e0-59d0-412d-96e1-298020ef9927/gateway/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.800503 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-84f4bcb569-zxt7t_d1223933-4ce9-41dd-9c8a-14a59b540e20/gateway/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.848141 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-84f4bcb569-zxt7t_d1223933-4ce9-41dd-9c8a-14a59b540e20/opa/0.log" Feb 16 16:25:00 crc kubenswrapper[4705]: I0216 16:25:00.988928 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_4cde3c29-9511-489b-9849-468cae07d312/loki-index-gateway/0.log" Feb 16 16:25:01 crc kubenswrapper[4705]: I0216 16:25:01.091153 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_5a1922a4-a6c5-4187-bcd3-f0e05f3e4fcf/loki-ingester/0.log" Feb 16 16:25:01 
crc kubenswrapper[4705]: I0216 16:25:01.265476 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-rbcrd_dd10ec10-e122-430f-afaf-b0b8222a6b15/loki-querier/0.log" Feb 16 16:25:01 crc kubenswrapper[4705]: I0216 16:25:01.342457 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-mbwk8_8e2f02fa-7b78-49ef-8c1a-f9cf7387e063/loki-query-frontend/0.log" Feb 16 16:25:08 crc kubenswrapper[4705]: E0216 16:25:08.430611 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:25:14 crc kubenswrapper[4705]: E0216 16:25:14.422088 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.529809 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4fk2w"] Feb 16 16:25:15 crc kubenswrapper[4705]: E0216 16:25:15.530706 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="extract-utilities" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.530723 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="extract-utilities" Feb 16 16:25:15 crc kubenswrapper[4705]: E0216 16:25:15.530757 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.530766 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" Feb 16 16:25:15 crc kubenswrapper[4705]: E0216 16:25:15.530792 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="extract-content" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.530802 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="extract-content" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.531081 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc07fe9-1299-4e6c-8178-a7c42b022c7c" containerName="registry-server" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.535031 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.553788 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4fk2w"] Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.564839 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-catalog-content\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.565312 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-utilities\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") 
" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.565540 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd4c5\" (UniqueName: \"kubernetes.io/projected/52d06b15-705b-47a8-8a15-7f41452d5007-kube-api-access-xd4c5\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.670709 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-utilities\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.670933 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd4c5\" (UniqueName: \"kubernetes.io/projected/52d06b15-705b-47a8-8a15-7f41452d5007-kube-api-access-xd4c5\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.671124 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-catalog-content\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.671200 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-utilities\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " 
pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.671525 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-catalog-content\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.701877 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd4c5\" (UniqueName: \"kubernetes.io/projected/52d06b15-705b-47a8-8a15-7f41452d5007-kube-api-access-xd4c5\") pod \"certified-operators-4fk2w\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:15 crc kubenswrapper[4705]: I0216 16:25:15.892914 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:16 crc kubenswrapper[4705]: I0216 16:25:16.532068 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4fk2w"] Feb 16 16:25:17 crc kubenswrapper[4705]: I0216 16:25:17.713853 4705 generic.go:334] "Generic (PLEG): container finished" podID="52d06b15-705b-47a8-8a15-7f41452d5007" containerID="993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408" exitCode=0 Feb 16 16:25:17 crc kubenswrapper[4705]: I0216 16:25:17.714088 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerDied","Data":"993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408"} Feb 16 16:25:17 crc kubenswrapper[4705]: I0216 16:25:17.714116 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" 
event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerStarted","Data":"217b096c44622a46ad4ed6734a3e3730e80590af979a4af721540c8228924fb7"} Feb 16 16:25:19 crc kubenswrapper[4705]: E0216 16:25:19.422079 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:25:19 crc kubenswrapper[4705]: I0216 16:25:19.475729 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-5p2db_493ad03c-5e3e-4726-9764-272f39f5aa37/kube-rbac-proxy/0.log" Feb 16 16:25:19 crc kubenswrapper[4705]: I0216 16:25:19.714469 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-5p2db_493ad03c-5e3e-4726-9764-272f39f5aa37/controller/0.log" Feb 16 16:25:19 crc kubenswrapper[4705]: I0216 16:25:19.737896 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerStarted","Data":"cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864"} Feb 16 16:25:19 crc kubenswrapper[4705]: I0216 16:25:19.741230 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-frr-files/0.log" Feb 16 16:25:19 crc kubenswrapper[4705]: I0216 16:25:19.974075 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-frr-files/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.003247 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-reloader/0.log" Feb 16 16:25:20 crc 
kubenswrapper[4705]: I0216 16:25:20.023667 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-reloader/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.046421 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-metrics/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.202902 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-frr-files/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.246837 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-reloader/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.292646 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-metrics/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.300931 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-metrics/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.520211 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-frr-files/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.525711 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-reloader/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.560490 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/cp-metrics/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.595989 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/controller/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.713059 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/frr-metrics/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.789508 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/kube-rbac-proxy/0.log" Feb 16 16:25:20 crc kubenswrapper[4705]: I0216 16:25:20.910990 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/kube-rbac-proxy-frr/0.log" Feb 16 16:25:21 crc kubenswrapper[4705]: I0216 16:25:21.330860 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/reloader/0.log" Feb 16 16:25:21 crc kubenswrapper[4705]: I0216 16:25:21.454315 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-x4255_751baaae-9090-48b1-9bae-79b7527d6c02/frr-k8s-webhook-server/0.log" Feb 16 16:25:21 crc kubenswrapper[4705]: I0216 16:25:21.759442 4705 generic.go:334] "Generic (PLEG): container finished" podID="52d06b15-705b-47a8-8a15-7f41452d5007" containerID="cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864" exitCode=0 Feb 16 16:25:21 crc kubenswrapper[4705]: I0216 16:25:21.759505 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerDied","Data":"cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864"} Feb 16 16:25:21 crc kubenswrapper[4705]: I0216 16:25:21.904148 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76745d596b-4dznb_55ce7b61-e1e6-483d-a84f-7ea168ef9672/manager/0.log" Feb 16 16:25:22 crc kubenswrapper[4705]: I0216 16:25:22.120940 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75967976b4-q84hp_624f7ca8-2011-4ed6-9ee2-24acddf29390/webhook-server/0.log" Feb 16 16:25:22 crc kubenswrapper[4705]: I0216 16:25:22.195202 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nbgmf_2536f291-dea1-4673-acf7-9beaffa87817/kube-rbac-proxy/0.log" Feb 16 16:25:22 crc kubenswrapper[4705]: I0216 16:25:22.304605 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5znjj_06291746-6582-464c-9dff-b4b98a359885/frr/0.log" Feb 16 16:25:22 crc kubenswrapper[4705]: I0216 16:25:22.778101 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerStarted","Data":"afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f"} Feb 16 16:25:22 crc kubenswrapper[4705]: I0216 16:25:22.831068 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4fk2w" podStartSLOduration=3.081790078 podStartE2EDuration="7.831039705s" podCreationTimestamp="2026-02-16 16:25:15 +0000 UTC" firstStartedPulling="2026-02-16 16:25:17.716015825 +0000 UTC m=+5511.900992901" lastFinishedPulling="2026-02-16 16:25:22.465265452 +0000 UTC m=+5516.650242528" observedRunningTime="2026-02-16 16:25:22.815309271 +0000 UTC m=+5517.000286357" watchObservedRunningTime="2026-02-16 16:25:22.831039705 +0000 UTC m=+5517.016016781" Feb 16 16:25:22 crc kubenswrapper[4705]: I0216 16:25:22.972520 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nbgmf_2536f291-dea1-4673-acf7-9beaffa87817/speaker/0.log" Feb 16 16:25:25 crc 
kubenswrapper[4705]: I0216 16:25:25.893570 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:25 crc kubenswrapper[4705]: I0216 16:25:25.894167 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:25 crc kubenswrapper[4705]: I0216 16:25:25.949104 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:29 crc kubenswrapper[4705]: E0216 16:25:29.422970 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:25:33 crc kubenswrapper[4705]: E0216 16:25:33.424328 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:25:35 crc kubenswrapper[4705]: I0216 16:25:35.972351 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:36 crc kubenswrapper[4705]: I0216 16:25:36.033086 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4fk2w"] Feb 16 16:25:36 crc kubenswrapper[4705]: I0216 16:25:36.900565 4705 scope.go:117] "RemoveContainer" containerID="f76a2880637ec8e061f810a39410c0ce57f54c2c68714b7a697e5bece42d51ef" Feb 16 16:25:36 crc kubenswrapper[4705]: I0216 16:25:36.961252 4705 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-4fk2w" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="registry-server" containerID="cri-o://afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f" gracePeriod=2 Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.346331 4705 scope.go:117] "RemoveContainer" containerID="8ffa3afd67b70ce0b5eb4a3090185efe4e0de6b1ad7376819fad4c7c92359e4c" Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.853932 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.856940 4705 scope.go:117] "RemoveContainer" containerID="21669da6af69e10615ec9d9bfd683312766c7eb62e5afb7d2c4d0c330e7be906" Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.989570 4705 generic.go:334] "Generic (PLEG): container finished" podID="52d06b15-705b-47a8-8a15-7f41452d5007" containerID="afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f" exitCode=0 Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.989729 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4fk2w" Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.991031 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerDied","Data":"afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f"} Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.991139 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4fk2w" event={"ID":"52d06b15-705b-47a8-8a15-7f41452d5007","Type":"ContainerDied","Data":"217b096c44622a46ad4ed6734a3e3730e80590af979a4af721540c8228924fb7"} Feb 16 16:25:37 crc kubenswrapper[4705]: I0216 16:25:37.991172 4705 scope.go:117] "RemoveContainer" containerID="afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.001504 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd4c5\" (UniqueName: \"kubernetes.io/projected/52d06b15-705b-47a8-8a15-7f41452d5007-kube-api-access-xd4c5\") pod \"52d06b15-705b-47a8-8a15-7f41452d5007\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.001566 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-catalog-content\") pod \"52d06b15-705b-47a8-8a15-7f41452d5007\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.001724 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-utilities\") pod \"52d06b15-705b-47a8-8a15-7f41452d5007\" (UID: \"52d06b15-705b-47a8-8a15-7f41452d5007\") " Feb 16 16:25:38 crc 
kubenswrapper[4705]: I0216 16:25:38.003269 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-utilities" (OuterVolumeSpecName: "utilities") pod "52d06b15-705b-47a8-8a15-7f41452d5007" (UID: "52d06b15-705b-47a8-8a15-7f41452d5007"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.013325 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d06b15-705b-47a8-8a15-7f41452d5007-kube-api-access-xd4c5" (OuterVolumeSpecName: "kube-api-access-xd4c5") pod "52d06b15-705b-47a8-8a15-7f41452d5007" (UID: "52d06b15-705b-47a8-8a15-7f41452d5007"). InnerVolumeSpecName "kube-api-access-xd4c5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.022706 4705 scope.go:117] "RemoveContainer" containerID="cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.061127 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52d06b15-705b-47a8-8a15-7f41452d5007" (UID: "52d06b15-705b-47a8-8a15-7f41452d5007"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.079542 4705 scope.go:117] "RemoveContainer" containerID="993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.105601 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd4c5\" (UniqueName: \"kubernetes.io/projected/52d06b15-705b-47a8-8a15-7f41452d5007-kube-api-access-xd4c5\") on node \"crc\" DevicePath \"\"" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.105649 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.105665 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d06b15-705b-47a8-8a15-7f41452d5007-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.110279 4705 scope.go:117] "RemoveContainer" containerID="afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f" Feb 16 16:25:38 crc kubenswrapper[4705]: E0216 16:25:38.111015 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f\": container with ID starting with afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f not found: ID does not exist" containerID="afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.111063 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f"} err="failed to get container status 
\"afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f\": rpc error: code = NotFound desc = could not find container \"afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f\": container with ID starting with afe0bf28188ae00f87c26ecb922bf3f52c8fdf4226b57c9efd3bb206823b2e1f not found: ID does not exist" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.111093 4705 scope.go:117] "RemoveContainer" containerID="cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864" Feb 16 16:25:38 crc kubenswrapper[4705]: E0216 16:25:38.111607 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864\": container with ID starting with cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864 not found: ID does not exist" containerID="cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.111650 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864"} err="failed to get container status \"cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864\": rpc error: code = NotFound desc = could not find container \"cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864\": container with ID starting with cc036e4167742c4765d77ae80e7bb3882c56f1bef82be4991244a357df179864 not found: ID does not exist" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.111685 4705 scope.go:117] "RemoveContainer" containerID="993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408" Feb 16 16:25:38 crc kubenswrapper[4705]: E0216 16:25:38.112141 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408\": container with ID starting with 993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408 not found: ID does not exist" containerID="993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.112172 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408"} err="failed to get container status \"993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408\": rpc error: code = NotFound desc = could not find container \"993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408\": container with ID starting with 993a88a9b7b63101f9a1cd56fca5dbc50924b5d94dcaccd0a38c7f5e3b3f9408 not found: ID does not exist" Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.322934 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4fk2w"] Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.337958 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4fk2w"] Feb 16 16:25:38 crc kubenswrapper[4705]: I0216 16:25:38.436077 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" path="/var/lib/kubelet/pods/52d06b15-705b-47a8-8a15-7f41452d5007/volumes" Feb 16 16:25:39 crc kubenswrapper[4705]: I0216 16:25:39.549050 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/util/0.log" Feb 16 16:25:39 crc kubenswrapper[4705]: I0216 16:25:39.730776 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/util/0.log" Feb 16 
16:25:39 crc kubenswrapper[4705]: I0216 16:25:39.780096 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/pull/0.log" Feb 16 16:25:39 crc kubenswrapper[4705]: I0216 16:25:39.784821 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/pull/0.log" Feb 16 16:25:39 crc kubenswrapper[4705]: I0216 16:25:39.908711 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/util/0.log" Feb 16 16:25:39 crc kubenswrapper[4705]: I0216 16:25:39.932197 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/pull/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.025439 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19hmlng_c1187d92-0ea8-46f2-9784-ddea0852aa5f/extract/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.197983 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/util/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.408974 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/util/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: E0216 16:25:40.422367 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.459152 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/pull/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.462623 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/pull/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.640212 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/extract/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.654292 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/pull/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.686284 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08p8spl_0d36f8fb-4d40-48ef-b2af-aee94e39388a/util/0.log" Feb 16 16:25:40 crc kubenswrapper[4705]: I0216 16:25:40.870084 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/util/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.008631 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/pull/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.033481 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/util/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.050768 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/pull/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.252903 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/pull/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.299345 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/util/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.320864 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vldc7_e5b4da77-aea8-42f2-8a75-43943612e0e4/extract/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.517566 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/extract-utilities/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.676606 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/extract-content/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 
16:25:41.697484 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/extract-content/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.697621 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/extract-utilities/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.915964 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/extract-utilities/0.log" Feb 16 16:25:41 crc kubenswrapper[4705]: I0216 16:25:41.935619 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/extract-content/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.179862 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/extract-utilities/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.439664 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/extract-content/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.452903 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/extract-utilities/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.462815 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/extract-content/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.716319 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-x6x46_f7cf3246-f6e6-4509-bde8-6f5db1285126/registry-server/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.786731 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/extract-utilities/0.log" Feb 16 16:25:42 crc kubenswrapper[4705]: I0216 16:25:42.843718 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/extract-content/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.138449 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9zjsp_ffc91527-f266-408e-9dad-4ded626632f6/registry-server/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.376176 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/util/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.533418 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/pull/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.533525 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/pull/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.548632 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/util/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.746558 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/util/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.753714 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/pull/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.782032 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/extract/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.788049 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hdndd_8035ad9d-50ca-4849-aefe-f1251588793d/util/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.995752 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/util/0.log" Feb 16 16:25:43 crc kubenswrapper[4705]: I0216 16:25:43.999291 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/pull/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.003831 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/pull/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.261812 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/util/0.log" Feb 16 
16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.280200 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/extract/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.337257 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ghmpd_88197577-5157-4d99-9813-eb3173530b4f/marketplace-operator/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.337925 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatccwp_50f390f7-dc79-47dd-80e2-436b17df094c/pull/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.513945 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/extract-utilities/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.752729 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/extract-content/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.765343 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/extract-content/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.787999 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/extract-utilities/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 16:25:44.928256 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/extract-utilities/0.log" Feb 16 16:25:44 crc kubenswrapper[4705]: I0216 
16:25:44.946780 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/extract-content/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.037054 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/extract-utilities/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.239969 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wptq4_3c9c10e6-7615-4597-91c4-4a8c67ccf112/registry-server/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.250831 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/extract-content/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.270764 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/extract-utilities/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.313058 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/extract-content/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.494362 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/extract-utilities/0.log" Feb 16 16:25:45 crc kubenswrapper[4705]: I0216 16:25:45.519529 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/extract-content/0.log" Feb 16 16:25:46 crc kubenswrapper[4705]: I0216 16:25:46.274094 4705 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-dzbk2_615ad81b-0e00-4b06-88eb-970b4e942b56/registry-server/0.log" Feb 16 16:25:48 crc kubenswrapper[4705]: E0216 16:25:48.427727 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:25:54 crc kubenswrapper[4705]: E0216 16:25:54.422016 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:26:02 crc kubenswrapper[4705]: I0216 16:26:02.464728 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-f8kwg_59894fc4-090e-4e57-84d9-c6fdbe5f3ceb/prometheus-operator/0.log" Feb 16 16:26:02 crc kubenswrapper[4705]: I0216 16:26:02.534480 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75758f868f-lthbl_b90dedac-68bb-409d-9860-af59c6c7d172/prometheus-operator-admission-webhook/0.log" Feb 16 16:26:02 crc kubenswrapper[4705]: I0216 16:26:02.539952 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75758f868f-gkzfh_81328a1c-32d6-4ce6-9139-8418d2e8fa52/prometheus-operator-admission-webhook/0.log" Feb 16 16:26:02 crc kubenswrapper[4705]: I0216 16:26:02.765886 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-l2rxp_5510c272-cd32-4850-a9fa-daff2e045b92/operator/0.log" 
Feb 16 16:26:02 crc kubenswrapper[4705]: I0216 16:26:02.836194 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tqj56_8acc36de-d26d-44cd-bad6-d31f0a4a4520/perses-operator/0.log" Feb 16 16:26:02 crc kubenswrapper[4705]: I0216 16:26:02.846381 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-9hcns_72697fcc-cd94-4ba9-9479-cb5bd82d83ab/observability-ui-dashboards/0.log" Feb 16 16:26:03 crc kubenswrapper[4705]: E0216 16:26:03.443583 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:26:07 crc kubenswrapper[4705]: E0216 16:26:07.421828 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:26:17 crc kubenswrapper[4705]: E0216 16:26:17.422618 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:26:18 crc kubenswrapper[4705]: I0216 16:26:18.096305 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6b7769c4bd-hnqwn_e0f8cfad-0639-40d4-8a2c-832935b8cddc/kube-rbac-proxy/0.log" Feb 16 16:26:18 crc 
kubenswrapper[4705]: I0216 16:26:18.145807 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6b7769c4bd-hnqwn_e0f8cfad-0639-40d4-8a2c-832935b8cddc/manager/0.log" Feb 16 16:26:20 crc kubenswrapper[4705]: E0216 16:26:20.423703 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:26:29 crc kubenswrapper[4705]: I0216 16:26:29.421190 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 16:26:29 crc kubenswrapper[4705]: E0216 16:26:29.552684 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:26:29 crc kubenswrapper[4705]: E0216 16:26:29.552767 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:26:29 crc kubenswrapper[4705]: E0216 16:26:29.552917 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:26:29 crc kubenswrapper[4705]: E0216 16:26:29.554083 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:26:31 crc kubenswrapper[4705]: E0216 16:26:31.543723 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:26:31 crc kubenswrapper[4705]: E0216 16:26:31.544275 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:26:31 crc kubenswrapper[4705]: E0216 16:26:31.544460 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 16:26:31 crc kubenswrapper[4705]: E0216 16:26:31.545733 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:26:36 crc kubenswrapper[4705]: I0216 16:26:36.796351 4705 trace.go:236] Trace[1045464604]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (16-Feb-2026 16:26:35.772) (total time: 1024ms): Feb 16 16:26:36 crc kubenswrapper[4705]: Trace[1045464604]: [1.024282416s] [1.024282416s] END Feb 16 16:26:40 crc kubenswrapper[4705]: E0216 16:26:40.421334 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:26:45 crc kubenswrapper[4705]: E0216 16:26:45.421915 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:26:55 crc kubenswrapper[4705]: E0216 16:26:55.422435 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:27:00 crc kubenswrapper[4705]: E0216 16:27:00.424420 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:27:01 crc kubenswrapper[4705]: I0216 16:27:01.684088 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:27:01 crc kubenswrapper[4705]: I0216 16:27:01.684542 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:27:10 crc kubenswrapper[4705]: E0216 16:27:10.421071 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:27:11 crc kubenswrapper[4705]: E0216 16:27:11.433541 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:27:22 crc kubenswrapper[4705]: E0216 16:27:22.421915 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:27:24 crc kubenswrapper[4705]: E0216 16:27:24.421360 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:27:31 crc kubenswrapper[4705]: I0216 16:27:31.684018 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:27:31 crc kubenswrapper[4705]: I0216 16:27:31.684464 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:27:35 crc kubenswrapper[4705]: E0216 16:27:35.421739 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:27:39 crc kubenswrapper[4705]: E0216 16:27:39.422093 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:27:50 crc kubenswrapper[4705]: E0216 16:27:50.423093 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:27:50 crc kubenswrapper[4705]: E0216 16:27:50.423222 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:28:01 crc kubenswrapper[4705]: I0216 16:28:01.684002 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:28:01 crc kubenswrapper[4705]: I0216 16:28:01.684961 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:28:01 crc kubenswrapper[4705]: I0216 16:28:01.685282 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 16:28:01 crc kubenswrapper[4705]: I0216 16:28:01.685826 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33a5df3c257d84a8166fd06a7d0411c00b5ba907e5ffa85255e7d74010f46140"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 16:28:01 crc kubenswrapper[4705]: I0216 16:28:01.685889 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://33a5df3c257d84a8166fd06a7d0411c00b5ba907e5ffa85255e7d74010f46140" gracePeriod=600 Feb 16 16:28:02 crc kubenswrapper[4705]: I0216 16:28:02.683055 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="33a5df3c257d84a8166fd06a7d0411c00b5ba907e5ffa85255e7d74010f46140" exitCode=0 Feb 16 16:28:02 crc kubenswrapper[4705]: I0216 16:28:02.683147 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"33a5df3c257d84a8166fd06a7d0411c00b5ba907e5ffa85255e7d74010f46140"} Feb 16 16:28:02 crc kubenswrapper[4705]: I0216 16:28:02.683514 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" 
event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerStarted","Data":"8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95"} Feb 16 16:28:02 crc kubenswrapper[4705]: I0216 16:28:02.683535 4705 scope.go:117] "RemoveContainer" containerID="52459c3edebb977c4b11e31cfad2df957a366eb1ef8aa2c14020b345bc277b71" Feb 16 16:28:04 crc kubenswrapper[4705]: E0216 16:28:04.422695 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:28:05 crc kubenswrapper[4705]: E0216 16:28:05.423029 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:28:16 crc kubenswrapper[4705]: E0216 16:28:16.429724 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:28:17 crc kubenswrapper[4705]: I0216 16:28:17.869579 4705 generic.go:334] "Generic (PLEG): container finished" podID="b3941987-2937-407a-a067-3f3af600f1f0" containerID="f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97" exitCode=0 Feb 16 16:28:17 crc kubenswrapper[4705]: I0216 16:28:17.869666 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" 
event={"ID":"b3941987-2937-407a-a067-3f3af600f1f0","Type":"ContainerDied","Data":"f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97"} Feb 16 16:28:17 crc kubenswrapper[4705]: I0216 16:28:17.870883 4705 scope.go:117] "RemoveContainer" containerID="f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97" Feb 16 16:28:17 crc kubenswrapper[4705]: I0216 16:28:17.987405 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bbqmf_must-gather-tx2kt_b3941987-2937-407a-a067-3f3af600f1f0/gather/0.log" Feb 16 16:28:18 crc kubenswrapper[4705]: E0216 16:28:18.422056 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.167966 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bbqmf/must-gather-tx2kt"] Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.168877 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="copy" containerID="cri-o://8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd" gracePeriod=2 Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.182728 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bbqmf/must-gather-tx2kt"] Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.791872 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bbqmf_must-gather-tx2kt_b3941987-2937-407a-a067-3f3af600f1f0/copy/0.log" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.792910 4705 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.845591 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mmdn\" (UniqueName: \"kubernetes.io/projected/b3941987-2937-407a-a067-3f3af600f1f0-kube-api-access-9mmdn\") pod \"b3941987-2937-407a-a067-3f3af600f1f0\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.845983 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3941987-2937-407a-a067-3f3af600f1f0-must-gather-output\") pod \"b3941987-2937-407a-a067-3f3af600f1f0\" (UID: \"b3941987-2937-407a-a067-3f3af600f1f0\") " Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.853033 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3941987-2937-407a-a067-3f3af600f1f0-kube-api-access-9mmdn" (OuterVolumeSpecName: "kube-api-access-9mmdn") pod "b3941987-2937-407a-a067-3f3af600f1f0" (UID: "b3941987-2937-407a-a067-3f3af600f1f0"). InnerVolumeSpecName "kube-api-access-9mmdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.948793 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mmdn\" (UniqueName: \"kubernetes.io/projected/b3941987-2937-407a-a067-3f3af600f1f0-kube-api-access-9mmdn\") on node \"crc\" DevicePath \"\"" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.967914 4705 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bbqmf_must-gather-tx2kt_b3941987-2937-407a-a067-3f3af600f1f0/copy/0.log" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.968300 4705 generic.go:334] "Generic (PLEG): container finished" podID="b3941987-2937-407a-a067-3f3af600f1f0" containerID="8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd" exitCode=143 Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.968342 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bbqmf/must-gather-tx2kt" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.968354 4705 scope.go:117] "RemoveContainer" containerID="8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd" Feb 16 16:28:27 crc kubenswrapper[4705]: I0216 16:28:27.990887 4705 scope.go:117] "RemoveContainer" containerID="f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.025407 4705 scope.go:117] "RemoveContainer" containerID="8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd" Feb 16 16:28:28 crc kubenswrapper[4705]: E0216 16:28:28.026067 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd\": container with ID starting with 8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd not found: ID does not exist" 
containerID="8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.026129 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd"} err="failed to get container status \"8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd\": rpc error: code = NotFound desc = could not find container \"8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd\": container with ID starting with 8c38d32adfebcbe23de5da224b131d3b8abe08a3554c3cec49828c4a1323d2cd not found: ID does not exist" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.026170 4705 scope.go:117] "RemoveContainer" containerID="f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97" Feb 16 16:28:28 crc kubenswrapper[4705]: E0216 16:28:28.026712 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97\": container with ID starting with f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97 not found: ID does not exist" containerID="f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.026737 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97"} err="failed to get container status \"f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97\": rpc error: code = NotFound desc = could not find container \"f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97\": container with ID starting with f1fca7bd1958857f95e5d6ffd3c7c072d41925db770cc69d49e82ec281f4ed97 not found: ID does not exist" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.035555 4705 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3941987-2937-407a-a067-3f3af600f1f0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b3941987-2937-407a-a067-3f3af600f1f0" (UID: "b3941987-2937-407a-a067-3f3af600f1f0"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.052267 4705 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3941987-2937-407a-a067-3f3af600f1f0-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 16:28:28 crc kubenswrapper[4705]: I0216 16:28:28.435196 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3941987-2937-407a-a067-3f3af600f1f0" path="/var/lib/kubelet/pods/b3941987-2937-407a-a067-3f3af600f1f0/volumes" Feb 16 16:28:29 crc kubenswrapper[4705]: E0216 16:28:29.421737 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:28:30 crc kubenswrapper[4705]: E0216 16:28:30.429775 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:28:40 crc kubenswrapper[4705]: E0216 16:28:40.425999 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:28:43 crc kubenswrapper[4705]: E0216 16:28:43.422199 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:28:53 crc kubenswrapper[4705]: E0216 16:28:53.422442 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.084896 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kkscm"] Feb 16 16:28:54 crc kubenswrapper[4705]: E0216 16:28:54.085695 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="registry-server" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.085710 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="registry-server" Feb 16 16:28:54 crc kubenswrapper[4705]: E0216 16:28:54.085742 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="extract-utilities" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.085748 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="extract-utilities" Feb 16 16:28:54 crc kubenswrapper[4705]: E0216 
16:28:54.085770 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="gather" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.085776 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="gather" Feb 16 16:28:54 crc kubenswrapper[4705]: E0216 16:28:54.085795 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="copy" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.085801 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="copy" Feb 16 16:28:54 crc kubenswrapper[4705]: E0216 16:28:54.085814 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="extract-content" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.085820 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="extract-content" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.086026 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="copy" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.086042 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d06b15-705b-47a8-8a15-7f41452d5007" containerName="registry-server" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.086061 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3941987-2937-407a-a067-3f3af600f1f0" containerName="gather" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.088819 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.112145 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kkscm"] Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.235426 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cddx\" (UniqueName: \"kubernetes.io/projected/b529f129-e471-43ba-a45a-abad696e8aef-kube-api-access-9cddx\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.235605 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-catalog-content\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.235666 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-utilities\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.338641 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cddx\" (UniqueName: \"kubernetes.io/projected/b529f129-e471-43ba-a45a-abad696e8aef-kube-api-access-9cddx\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.338768 4705 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-catalog-content\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.338817 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-utilities\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.339330 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-utilities\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.339517 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-catalog-content\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.361406 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cddx\" (UniqueName: \"kubernetes.io/projected/b529f129-e471-43ba-a45a-abad696e8aef-kube-api-access-9cddx\") pod \"community-operators-kkscm\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:54 crc kubenswrapper[4705]: I0216 16:28:54.417549 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:28:55 crc kubenswrapper[4705]: I0216 16:28:55.006639 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kkscm"] Feb 16 16:28:55 crc kubenswrapper[4705]: I0216 16:28:55.260080 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerStarted","Data":"c89491e849a1c78fb88eb2dec7be0b61f81986b7258b31d04673e79b9f08e9c4"} Feb 16 16:28:55 crc kubenswrapper[4705]: I0216 16:28:55.260132 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerStarted","Data":"3b83c9864ca9453bf150f110a6c0809e70e7b4baa3dc50e9a2db811f18961e1b"} Feb 16 16:28:56 crc kubenswrapper[4705]: I0216 16:28:56.273963 4705 generic.go:334] "Generic (PLEG): container finished" podID="b529f129-e471-43ba-a45a-abad696e8aef" containerID="c89491e849a1c78fb88eb2dec7be0b61f81986b7258b31d04673e79b9f08e9c4" exitCode=0 Feb 16 16:28:56 crc kubenswrapper[4705]: I0216 16:28:56.274037 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerDied","Data":"c89491e849a1c78fb88eb2dec7be0b61f81986b7258b31d04673e79b9f08e9c4"} Feb 16 16:28:56 crc kubenswrapper[4705]: E0216 16:28:56.427986 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:28:57 crc kubenswrapper[4705]: I0216 16:28:57.288470 4705 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerStarted","Data":"c2af688e2efad623f4e1151e94ea0980442c407ef2fd2854e62922882461f2e4"} Feb 16 16:28:58 crc kubenswrapper[4705]: I0216 16:28:58.300806 4705 generic.go:334] "Generic (PLEG): container finished" podID="b529f129-e471-43ba-a45a-abad696e8aef" containerID="c2af688e2efad623f4e1151e94ea0980442c407ef2fd2854e62922882461f2e4" exitCode=0 Feb 16 16:28:58 crc kubenswrapper[4705]: I0216 16:28:58.300911 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerDied","Data":"c2af688e2efad623f4e1151e94ea0980442c407ef2fd2854e62922882461f2e4"} Feb 16 16:29:00 crc kubenswrapper[4705]: I0216 16:29:00.348360 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerStarted","Data":"5432b9a8411669dc9857374edad2f13a6e209c3c68988d0dee8cdb6979b1148c"} Feb 16 16:29:00 crc kubenswrapper[4705]: I0216 16:29:00.386396 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kkscm" podStartSLOduration=3.936724065 podStartE2EDuration="6.386356797s" podCreationTimestamp="2026-02-16 16:28:54 +0000 UTC" firstStartedPulling="2026-02-16 16:28:56.27710176 +0000 UTC m=+5730.462078836" lastFinishedPulling="2026-02-16 16:28:58.726734492 +0000 UTC m=+5732.911711568" observedRunningTime="2026-02-16 16:29:00.369938383 +0000 UTC m=+5734.554915459" watchObservedRunningTime="2026-02-16 16:29:00.386356797 +0000 UTC m=+5734.571333873" Feb 16 16:29:04 crc kubenswrapper[4705]: I0216 16:29:04.418083 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:29:04 crc kubenswrapper[4705]: I0216 16:29:04.418820 4705 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:29:04 crc kubenswrapper[4705]: I0216 16:29:04.479618 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:29:05 crc kubenswrapper[4705]: I0216 16:29:05.481327 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:29:05 crc kubenswrapper[4705]: I0216 16:29:05.547029 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kkscm"] Feb 16 16:29:06 crc kubenswrapper[4705]: E0216 16:29:06.435464 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:29:07 crc kubenswrapper[4705]: I0216 16:29:07.429287 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kkscm" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="registry-server" containerID="cri-o://5432b9a8411669dc9857374edad2f13a6e209c3c68988d0dee8cdb6979b1148c" gracePeriod=2 Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.450511 4705 generic.go:334] "Generic (PLEG): container finished" podID="b529f129-e471-43ba-a45a-abad696e8aef" containerID="5432b9a8411669dc9857374edad2f13a6e209c3c68988d0dee8cdb6979b1148c" exitCode=0 Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.450568 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" 
event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerDied","Data":"5432b9a8411669dc9857374edad2f13a6e209c3c68988d0dee8cdb6979b1148c"} Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.451084 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkscm" event={"ID":"b529f129-e471-43ba-a45a-abad696e8aef","Type":"ContainerDied","Data":"3b83c9864ca9453bf150f110a6c0809e70e7b4baa3dc50e9a2db811f18961e1b"} Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.451101 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b83c9864ca9453bf150f110a6c0809e70e7b4baa3dc50e9a2db811f18961e1b" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.496050 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.648997 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-utilities\") pod \"b529f129-e471-43ba-a45a-abad696e8aef\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.649144 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cddx\" (UniqueName: \"kubernetes.io/projected/b529f129-e471-43ba-a45a-abad696e8aef-kube-api-access-9cddx\") pod \"b529f129-e471-43ba-a45a-abad696e8aef\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.649186 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-catalog-content\") pod \"b529f129-e471-43ba-a45a-abad696e8aef\" (UID: \"b529f129-e471-43ba-a45a-abad696e8aef\") " Feb 16 16:29:08 crc kubenswrapper[4705]: 
I0216 16:29:08.651112 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-utilities" (OuterVolumeSpecName: "utilities") pod "b529f129-e471-43ba-a45a-abad696e8aef" (UID: "b529f129-e471-43ba-a45a-abad696e8aef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.657307 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b529f129-e471-43ba-a45a-abad696e8aef-kube-api-access-9cddx" (OuterVolumeSpecName: "kube-api-access-9cddx") pod "b529f129-e471-43ba-a45a-abad696e8aef" (UID: "b529f129-e471-43ba-a45a-abad696e8aef"). InnerVolumeSpecName "kube-api-access-9cddx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.710278 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b529f129-e471-43ba-a45a-abad696e8aef" (UID: "b529f129-e471-43ba-a45a-abad696e8aef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.753069 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cddx\" (UniqueName: \"kubernetes.io/projected/b529f129-e471-43ba-a45a-abad696e8aef-kube-api-access-9cddx\") on node \"crc\" DevicePath \"\"" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.753114 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:29:08 crc kubenswrapper[4705]: I0216 16:29:08.753126 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b529f129-e471-43ba-a45a-abad696e8aef-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:29:09 crc kubenswrapper[4705]: I0216 16:29:09.464010 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kkscm" Feb 16 16:29:09 crc kubenswrapper[4705]: I0216 16:29:09.538202 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kkscm"] Feb 16 16:29:09 crc kubenswrapper[4705]: I0216 16:29:09.550787 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kkscm"] Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.435041 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b529f129-e471-43ba-a45a-abad696e8aef" path="/var/lib/kubelet/pods/b529f129-e471-43ba-a45a-abad696e8aef/volumes" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.755628 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lw65w"] Feb 16 16:29:10 crc kubenswrapper[4705]: E0216 16:29:10.756438 4705 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="extract-utilities" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.756458 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="extract-utilities" Feb 16 16:29:10 crc kubenswrapper[4705]: E0216 16:29:10.756482 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="registry-server" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.756491 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="registry-server" Feb 16 16:29:10 crc kubenswrapper[4705]: E0216 16:29:10.756515 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="extract-content" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.756524 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="extract-content" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.756852 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="b529f129-e471-43ba-a45a-abad696e8aef" containerName="registry-server" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.759603 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.780296 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lw65w"] Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.819828 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdqth\" (UniqueName: \"kubernetes.io/projected/31f0330f-6e72-46a8-a663-593543de6aee-kube-api-access-jdqth\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.819912 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-utilities\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.820094 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-catalog-content\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.923541 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-catalog-content\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.923705 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jdqth\" (UniqueName: \"kubernetes.io/projected/31f0330f-6e72-46a8-a663-593543de6aee-kube-api-access-jdqth\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.923765 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-utilities\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.924417 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-utilities\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.924518 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-catalog-content\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:10 crc kubenswrapper[4705]: I0216 16:29:10.948979 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdqth\" (UniqueName: \"kubernetes.io/projected/31f0330f-6e72-46a8-a663-593543de6aee-kube-api-access-jdqth\") pod \"redhat-marketplace-lw65w\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:11 crc kubenswrapper[4705]: I0216 16:29:11.096027 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:11 crc kubenswrapper[4705]: E0216 16:29:11.424719 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:29:11 crc kubenswrapper[4705]: W0216 16:29:11.667440 4705 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31f0330f_6e72_46a8_a663_593543de6aee.slice/crio-9d76a09dcc575251f215012dca7f5547840847ae9ebd8bfe8a2fc09a0f5bad4f WatchSource:0}: Error finding container 9d76a09dcc575251f215012dca7f5547840847ae9ebd8bfe8a2fc09a0f5bad4f: Status 404 returned error can't find the container with id 9d76a09dcc575251f215012dca7f5547840847ae9ebd8bfe8a2fc09a0f5bad4f Feb 16 16:29:11 crc kubenswrapper[4705]: I0216 16:29:11.669726 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lw65w"] Feb 16 16:29:12 crc kubenswrapper[4705]: I0216 16:29:12.501964 4705 generic.go:334] "Generic (PLEG): container finished" podID="31f0330f-6e72-46a8-a663-593543de6aee" containerID="7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788" exitCode=0 Feb 16 16:29:12 crc kubenswrapper[4705]: I0216 16:29:12.502657 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lw65w" event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerDied","Data":"7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788"} Feb 16 16:29:12 crc kubenswrapper[4705]: I0216 16:29:12.502900 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lw65w" 
event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerStarted","Data":"9d76a09dcc575251f215012dca7f5547840847ae9ebd8bfe8a2fc09a0f5bad4f"} Feb 16 16:29:13 crc kubenswrapper[4705]: I0216 16:29:13.516410 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lw65w" event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerStarted","Data":"935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6"} Feb 16 16:29:14 crc kubenswrapper[4705]: I0216 16:29:14.547251 4705 generic.go:334] "Generic (PLEG): container finished" podID="31f0330f-6e72-46a8-a663-593543de6aee" containerID="935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6" exitCode=0 Feb 16 16:29:14 crc kubenswrapper[4705]: I0216 16:29:14.547678 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lw65w" event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerDied","Data":"935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6"} Feb 16 16:29:16 crc kubenswrapper[4705]: I0216 16:29:16.580562 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lw65w" event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerStarted","Data":"8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246"} Feb 16 16:29:16 crc kubenswrapper[4705]: I0216 16:29:16.612689 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lw65w" podStartSLOduration=4.123868796 podStartE2EDuration="6.612633264s" podCreationTimestamp="2026-02-16 16:29:10 +0000 UTC" firstStartedPulling="2026-02-16 16:29:12.504386275 +0000 UTC m=+5746.689363351" lastFinishedPulling="2026-02-16 16:29:14.993150743 +0000 UTC m=+5749.178127819" observedRunningTime="2026-02-16 16:29:16.599501933 +0000 UTC m=+5750.784479009" watchObservedRunningTime="2026-02-16 16:29:16.612633264 +0000 UTC 
m=+5750.797610380" Feb 16 16:29:20 crc kubenswrapper[4705]: E0216 16:29:20.427527 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:29:21 crc kubenswrapper[4705]: I0216 16:29:21.096761 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:21 crc kubenswrapper[4705]: I0216 16:29:21.096809 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:21 crc kubenswrapper[4705]: I0216 16:29:21.177967 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:21 crc kubenswrapper[4705]: I0216 16:29:21.689576 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:21 crc kubenswrapper[4705]: I0216 16:29:21.747997 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lw65w"] Feb 16 16:29:23 crc kubenswrapper[4705]: I0216 16:29:23.656034 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lw65w" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="registry-server" containerID="cri-o://8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246" gracePeriod=2 Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.366032 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.485614 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-utilities\") pod \"31f0330f-6e72-46a8-a663-593543de6aee\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.485937 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdqth\" (UniqueName: \"kubernetes.io/projected/31f0330f-6e72-46a8-a663-593543de6aee-kube-api-access-jdqth\") pod \"31f0330f-6e72-46a8-a663-593543de6aee\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.486050 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-catalog-content\") pod \"31f0330f-6e72-46a8-a663-593543de6aee\" (UID: \"31f0330f-6e72-46a8-a663-593543de6aee\") " Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.487254 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-utilities" (OuterVolumeSpecName: "utilities") pod "31f0330f-6e72-46a8-a663-593543de6aee" (UID: "31f0330f-6e72-46a8-a663-593543de6aee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.499724 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f0330f-6e72-46a8-a663-593543de6aee-kube-api-access-jdqth" (OuterVolumeSpecName: "kube-api-access-jdqth") pod "31f0330f-6e72-46a8-a663-593543de6aee" (UID: "31f0330f-6e72-46a8-a663-593543de6aee"). InnerVolumeSpecName "kube-api-access-jdqth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.523685 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31f0330f-6e72-46a8-a663-593543de6aee" (UID: "31f0330f-6e72-46a8-a663-593543de6aee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.589211 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdqth\" (UniqueName: \"kubernetes.io/projected/31f0330f-6e72-46a8-a663-593543de6aee-kube-api-access-jdqth\") on node \"crc\" DevicePath \"\"" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.589579 4705 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.589685 4705 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31f0330f-6e72-46a8-a663-593543de6aee-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.678256 4705 generic.go:334] "Generic (PLEG): container finished" podID="31f0330f-6e72-46a8-a663-593543de6aee" containerID="8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246" exitCode=0 Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.678314 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lw65w" event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerDied","Data":"8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246"} Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.678348 4705 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-lw65w" event={"ID":"31f0330f-6e72-46a8-a663-593543de6aee","Type":"ContainerDied","Data":"9d76a09dcc575251f215012dca7f5547840847ae9ebd8bfe8a2fc09a0f5bad4f"} Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.678382 4705 scope.go:117] "RemoveContainer" containerID="8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.678418 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lw65w" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.724482 4705 scope.go:117] "RemoveContainer" containerID="935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.736477 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lw65w"] Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.747512 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lw65w"] Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.751446 4705 scope.go:117] "RemoveContainer" containerID="7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.799426 4705 scope.go:117] "RemoveContainer" containerID="8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246" Feb 16 16:29:24 crc kubenswrapper[4705]: E0216 16:29:24.800388 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246\": container with ID starting with 8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246 not found: ID does not exist" containerID="8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.800443 4705 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246"} err="failed to get container status \"8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246\": rpc error: code = NotFound desc = could not find container \"8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246\": container with ID starting with 8e34db6d01effc88a485f2aeb9735c1cd247637c247c2c7e43779422f4d75246 not found: ID does not exist" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.800472 4705 scope.go:117] "RemoveContainer" containerID="935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6" Feb 16 16:29:24 crc kubenswrapper[4705]: E0216 16:29:24.801522 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6\": container with ID starting with 935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6 not found: ID does not exist" containerID="935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.801569 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6"} err="failed to get container status \"935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6\": rpc error: code = NotFound desc = could not find container \"935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6\": container with ID starting with 935aeb7014d8c4187ee7b93e414353146f3c07843bd480ed3fa207b0881e00a6 not found: ID does not exist" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.801620 4705 scope.go:117] "RemoveContainer" containerID="7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788" Feb 16 16:29:24 crc kubenswrapper[4705]: E0216 
16:29:24.801901 4705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788\": container with ID starting with 7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788 not found: ID does not exist" containerID="7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788" Feb 16 16:29:24 crc kubenswrapper[4705]: I0216 16:29:24.801932 4705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788"} err="failed to get container status \"7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788\": rpc error: code = NotFound desc = could not find container \"7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788\": container with ID starting with 7c929ada1625c7fe59191bf3508c5e91fffe7a9ba42a4265de75fa5c9917f788 not found: ID does not exist" Feb 16 16:29:25 crc kubenswrapper[4705]: E0216 16:29:25.421138 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:29:26 crc kubenswrapper[4705]: I0216 16:29:26.437179 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31f0330f-6e72-46a8-a663-593543de6aee" path="/var/lib/kubelet/pods/31f0330f-6e72-46a8-a663-593543de6aee/volumes" Feb 16 16:29:33 crc kubenswrapper[4705]: E0216 16:29:33.421600 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:29:40 crc kubenswrapper[4705]: E0216 16:29:40.423001 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:29:46 crc kubenswrapper[4705]: E0216 16:29:46.438585 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:29:51 crc kubenswrapper[4705]: E0216 16:29:51.422821 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.170530 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv"] Feb 16 16:30:00 crc kubenswrapper[4705]: E0216 16:30:00.172267 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="extract-content" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.172291 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="extract-content" Feb 16 16:30:00 crc kubenswrapper[4705]: E0216 16:30:00.172315 4705 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="extract-utilities" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.172323 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="extract-utilities" Feb 16 16:30:00 crc kubenswrapper[4705]: E0216 16:30:00.172410 4705 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="registry-server" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.172423 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="registry-server" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.172787 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f0330f-6e72-46a8-a663-593543de6aee" containerName="registry-server" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.174211 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.176627 4705 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.178357 4705 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.200496 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv"] Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.359020 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-secret-volume\") pod 
\"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.359604 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-config-volume\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.360687 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk8zq\" (UniqueName: \"kubernetes.io/projected/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-kube-api-access-qk8zq\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.463216 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-secret-volume\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.463291 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-config-volume\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.463462 4705 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qk8zq\" (UniqueName: \"kubernetes.io/projected/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-kube-api-access-qk8zq\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.464714 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-config-volume\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.480238 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-secret-volume\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.481095 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk8zq\" (UniqueName: \"kubernetes.io/projected/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-kube-api-access-qk8zq\") pod \"collect-profiles-29520990-d6znv\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.513904 4705 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:00 crc kubenswrapper[4705]: I0216 16:30:00.984229 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv"] Feb 16 16:30:01 crc kubenswrapper[4705]: I0216 16:30:01.125129 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" event={"ID":"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546","Type":"ContainerStarted","Data":"55c9ea17e164f2c01540af923b7a4af8ffb0a2aeb49c39a010c04dc5049766da"} Feb 16 16:30:01 crc kubenswrapper[4705]: E0216 16:30:01.421467 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:30:02 crc kubenswrapper[4705]: I0216 16:30:02.140837 4705 generic.go:334] "Generic (PLEG): container finished" podID="3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" containerID="ebe76d7fb2dfcd6bac19a4d7c3d30e97b8f28e75a83763fdd5cf18cc5cda7b9b" exitCode=0 Feb 16 16:30:02 crc kubenswrapper[4705]: I0216 16:30:02.140930 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" event={"ID":"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546","Type":"ContainerDied","Data":"ebe76d7fb2dfcd6bac19a4d7c3d30e97b8f28e75a83763fdd5cf18cc5cda7b9b"} Feb 16 16:30:02 crc kubenswrapper[4705]: E0216 16:30:02.421420 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.570342 4705 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.663932 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-config-volume\") pod \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.664707 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk8zq\" (UniqueName: \"kubernetes.io/projected/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-kube-api-access-qk8zq\") pod \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.664922 4705 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-secret-volume\") pod \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\" (UID: \"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546\") " Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.664983 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-config-volume" (OuterVolumeSpecName: "config-volume") pod "3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" (UID: "3afa087f-18dc-42cd-a0b8-1ba6ce8bc546"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.665779 4705 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.676193 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" (UID: "3afa087f-18dc-42cd-a0b8-1ba6ce8bc546"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.678202 4705 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-kube-api-access-qk8zq" (OuterVolumeSpecName: "kube-api-access-qk8zq") pod "3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" (UID: "3afa087f-18dc-42cd-a0b8-1ba6ce8bc546"). InnerVolumeSpecName "kube-api-access-qk8zq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.768715 4705 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk8zq\" (UniqueName: \"kubernetes.io/projected/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-kube-api-access-qk8zq\") on node \"crc\" DevicePath \"\"" Feb 16 16:30:03 crc kubenswrapper[4705]: I0216 16:30:03.768931 4705 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3afa087f-18dc-42cd-a0b8-1ba6ce8bc546-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 16:30:04 crc kubenswrapper[4705]: I0216 16:30:04.164435 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" event={"ID":"3afa087f-18dc-42cd-a0b8-1ba6ce8bc546","Type":"ContainerDied","Data":"55c9ea17e164f2c01540af923b7a4af8ffb0a2aeb49c39a010c04dc5049766da"} Feb 16 16:30:04 crc kubenswrapper[4705]: I0216 16:30:04.164480 4705 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55c9ea17e164f2c01540af923b7a4af8ffb0a2aeb49c39a010c04dc5049766da" Feb 16 16:30:04 crc kubenswrapper[4705]: I0216 16:30:04.164517 4705 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520990-d6znv" Feb 16 16:30:04 crc kubenswrapper[4705]: I0216 16:30:04.666106 4705 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs"] Feb 16 16:30:04 crc kubenswrapper[4705]: I0216 16:30:04.676821 4705 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-hpffs"] Feb 16 16:30:06 crc kubenswrapper[4705]: I0216 16:30:06.435754 4705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45c99b78-85e9-4a2f-bcc4-76fab1e86ccd" path="/var/lib/kubelet/pods/45c99b78-85e9-4a2f-bcc4-76fab1e86ccd/volumes" Feb 16 16:30:13 crc kubenswrapper[4705]: E0216 16:30:13.421917 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:30:16 crc kubenswrapper[4705]: E0216 16:30:16.429541 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:30:25 crc kubenswrapper[4705]: E0216 16:30:25.421720 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:30:30 crc kubenswrapper[4705]: E0216 
16:30:30.422231 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:30:31 crc kubenswrapper[4705]: I0216 16:30:31.684630 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:30:31 crc kubenswrapper[4705]: I0216 16:30:31.684910 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:30:37 crc kubenswrapper[4705]: E0216 16:30:37.421851 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:30:38 crc kubenswrapper[4705]: I0216 16:30:38.188650 4705 scope.go:117] "RemoveContainer" containerID="c5799f899046339461728bd5e74a089bc2fd5675a54e2ff521c9c4de9307b408" Feb 16 16:30:44 crc kubenswrapper[4705]: E0216 16:30:44.423215 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:30:51 crc kubenswrapper[4705]: E0216 16:30:51.423747 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:30:58 crc kubenswrapper[4705]: E0216 16:30:58.423179 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:31:01 crc kubenswrapper[4705]: I0216 16:31:01.684459 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:31:01 crc kubenswrapper[4705]: I0216 16:31:01.685250 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:31:02 crc kubenswrapper[4705]: E0216 16:31:02.425159 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:31:11 crc kubenswrapper[4705]: E0216 16:31:11.423345 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:31:14 crc kubenswrapper[4705]: E0216 16:31:14.421466 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:31:26 crc kubenswrapper[4705]: E0216 16:31:26.431892 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:31:29 crc kubenswrapper[4705]: E0216 16:31:29.424445 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:31:31 crc kubenswrapper[4705]: I0216 16:31:31.683955 4705 patch_prober.go:28] interesting pod/machine-config-daemon-fnnf4 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 16:31:31 crc kubenswrapper[4705]: I0216 16:31:31.685232 4705 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 16:31:31 crc kubenswrapper[4705]: I0216 16:31:31.685355 4705 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" Feb 16 16:31:31 crc kubenswrapper[4705]: I0216 16:31:31.686461 4705 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95"} pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 16:31:31 crc kubenswrapper[4705]: I0216 16:31:31.686613 4705 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerName="machine-config-daemon" containerID="cri-o://8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" gracePeriod=600 Feb 16 16:31:31 crc kubenswrapper[4705]: E0216 16:31:31.813337 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:31:32 crc kubenswrapper[4705]: I0216 16:31:32.266089 4705 generic.go:334] "Generic (PLEG): container finished" podID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" exitCode=0 Feb 16 16:31:32 crc kubenswrapper[4705]: I0216 16:31:32.266140 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" event={"ID":"6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c","Type":"ContainerDied","Data":"8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95"} Feb 16 16:31:32 crc kubenswrapper[4705]: I0216 16:31:32.266180 4705 scope.go:117] "RemoveContainer" containerID="33a5df3c257d84a8166fd06a7d0411c00b5ba907e5ffa85255e7d74010f46140" Feb 16 16:31:32 crc kubenswrapper[4705]: I0216 16:31:32.267281 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:31:32 crc kubenswrapper[4705]: E0216 16:31:32.267758 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:31:37 crc kubenswrapper[4705]: I0216 16:31:37.423018 4705 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 16:31:37 crc kubenswrapper[4705]: E0216 16:31:37.507448 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: 
reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:31:37 crc kubenswrapper[4705]: E0216 16:31:37.507834 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 16 16:31:37 crc kubenswrapper[4705]: E0216 16:31:37.508009 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h656h665hfdh689h54dh8chbbhf4h669hbch566h55bh55fhdbh678h566h646h694h5d6h54h54bh55bh59fh8h5dh65fh54ch5f7hdbh5f4h59dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf945,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0eefb1ac-9933-45ff-a3de-de6a375bef45): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:31:37 crc kubenswrapper[4705]: E0216 16:31:37.509251 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:31:44 crc kubenswrapper[4705]: E0216 16:31:44.556927 4705 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:31:44 crc kubenswrapper[4705]: E0216 16:31:44.557627 4705 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 16 16:31:44 crc kubenswrapper[4705]: E0216 16:31:44.557783 4705 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-d9lbf_openstack(09e6dd23-2e83-460f-b42f-885bf7af0214): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:31:44 crc kubenswrapper[4705]: E0216 16:31:44.559004 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:31:46 crc kubenswrapper[4705]: I0216 16:31:46.426592 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:31:46 crc kubenswrapper[4705]: E0216 16:31:46.427229 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:31:48 crc kubenswrapper[4705]: E0216 16:31:48.424350 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:31:59 crc kubenswrapper[4705]: E0216 16:31:59.426047 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:31:59 crc kubenswrapper[4705]: E0216 16:31:59.426177 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:32:00 crc kubenswrapper[4705]: I0216 16:32:00.420967 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:32:00 crc kubenswrapper[4705]: E0216 16:32:00.421655 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:32:10 crc kubenswrapper[4705]: E0216 16:32:10.422156 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:32:13 crc kubenswrapper[4705]: I0216 16:32:13.420886 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:32:13 crc kubenswrapper[4705]: E0216 16:32:13.422047 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:32:13 crc kubenswrapper[4705]: E0216 16:32:13.422382 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:32:22 crc kubenswrapper[4705]: E0216 16:32:22.422708 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:32:26 crc kubenswrapper[4705]: E0216 16:32:26.433729 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:32:28 crc kubenswrapper[4705]: I0216 16:32:28.420412 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:32:28 crc kubenswrapper[4705]: E0216 16:32:28.421215 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:32:36 crc kubenswrapper[4705]: E0216 16:32:36.432446 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:32:40 crc kubenswrapper[4705]: E0216 16:32:40.423438 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:32:42 crc kubenswrapper[4705]: I0216 16:32:42.419442 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:32:42 crc kubenswrapper[4705]: E0216 16:32:42.420235 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:32:49 crc kubenswrapper[4705]: E0216 16:32:49.426948 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:32:52 crc kubenswrapper[4705]: E0216 16:32:52.423855 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:32:54 crc kubenswrapper[4705]: I0216 16:32:54.419551 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:32:54 crc kubenswrapper[4705]: E0216 16:32:54.420285 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:33:02 crc kubenswrapper[4705]: E0216 16:33:02.423469 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:33:05 crc kubenswrapper[4705]: I0216 16:33:05.420278 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:33:05 crc kubenswrapper[4705]: E0216 16:33:05.421197 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:33:05 crc kubenswrapper[4705]: E0216 16:33:05.422248 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:33:14 crc kubenswrapper[4705]: E0216 16:33:14.424288 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:33:16 crc kubenswrapper[4705]: I0216 16:33:16.437614 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:33:16 crc kubenswrapper[4705]: E0216 16:33:16.440170 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:33:16 crc kubenswrapper[4705]: E0216 16:33:16.440380 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:33:27 crc kubenswrapper[4705]: E0216 16:33:27.424237 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:33:28 crc kubenswrapper[4705]: E0216 16:33:28.422078 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:33:29 crc kubenswrapper[4705]: I0216 16:33:29.420499 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:33:29 crc kubenswrapper[4705]: E0216 16:33:29.421644 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:33:41 crc kubenswrapper[4705]: E0216 16:33:41.422484 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:33:41 crc kubenswrapper[4705]: E0216 16:33:41.422534 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:33:43 crc kubenswrapper[4705]: I0216 16:33:43.421543 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:33:43 crc kubenswrapper[4705]: E0216 16:33:43.422151 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:33:52 crc kubenswrapper[4705]: E0216 16:33:52.422945 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:33:54 crc kubenswrapper[4705]: I0216 16:33:54.419811 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:33:54 crc kubenswrapper[4705]: E0216 16:33:54.421467 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:33:56 crc kubenswrapper[4705]: E0216 16:33:56.429181 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:34:06 crc kubenswrapper[4705]: E0216 16:34:06.432685 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:34:07 crc kubenswrapper[4705]: I0216 16:34:07.420677 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:34:07 crc kubenswrapper[4705]: E0216 16:34:07.421006 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:34:07 crc kubenswrapper[4705]: E0216 16:34:07.421458 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.145616 4705 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rt2v7"] Feb 16 16:34:16 crc kubenswrapper[4705]: E0216 16:34:16.147743 4705 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" containerName="collect-profiles" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.147763 4705 state_mem.go:107] "Deleted CPUSet assignment" podUID="3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" containerName="collect-profiles" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.148092 4705 memory_manager.go:354] "RemoveStaleState removing state" podUID="3afa087f-18dc-42cd-a0b8-1ba6ce8bc546" containerName="collect-profiles" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.151708 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rt2v7" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.164296 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rt2v7"] Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.288792 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18edbe2f-e5ad-43df-863e-524fabeed67c-utilities\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.289024 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbjvk\" (UniqueName: \"kubernetes.io/projected/18edbe2f-e5ad-43df-863e-524fabeed67c-kube-api-access-cbjvk\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.289294 4705 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18edbe2f-e5ad-43df-863e-524fabeed67c-catalog-content\") pod 
\"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.391449 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbjvk\" (UniqueName: \"kubernetes.io/projected/18edbe2f-e5ad-43df-863e-524fabeed67c-kube-api-access-cbjvk\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.391584 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18edbe2f-e5ad-43df-863e-524fabeed67c-catalog-content\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.391675 4705 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18edbe2f-e5ad-43df-863e-524fabeed67c-utilities\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.392203 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18edbe2f-e5ad-43df-863e-524fabeed67c-utilities\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.392361 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18edbe2f-e5ad-43df-863e-524fabeed67c-catalog-content\") pod \"redhat-operators-rt2v7\" (UID: 
\"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.417325 4705 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbjvk\" (UniqueName: \"kubernetes.io/projected/18edbe2f-e5ad-43df-863e-524fabeed67c-kube-api-access-cbjvk\") pod \"redhat-operators-rt2v7\" (UID: \"18edbe2f-e5ad-43df-863e-524fabeed67c\") " pod="openshift-marketplace/redhat-operators-rt2v7" Feb 16 16:34:16 crc kubenswrapper[4705]: I0216 16:34:16.499360 4705 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rt2v7" Feb 16 16:34:17 crc kubenswrapper[4705]: I0216 16:34:17.083431 4705 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rt2v7"] Feb 16 16:34:17 crc kubenswrapper[4705]: I0216 16:34:17.648579 4705 generic.go:334] "Generic (PLEG): container finished" podID="18edbe2f-e5ad-43df-863e-524fabeed67c" containerID="1fabcd33a4de6ee5edbb119488563d893aee5b3a68182c6cb13f2e91e34c6dbf" exitCode=0 Feb 16 16:34:17 crc kubenswrapper[4705]: I0216 16:34:17.648629 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt2v7" event={"ID":"18edbe2f-e5ad-43df-863e-524fabeed67c","Type":"ContainerDied","Data":"1fabcd33a4de6ee5edbb119488563d893aee5b3a68182c6cb13f2e91e34c6dbf"} Feb 16 16:34:17 crc kubenswrapper[4705]: I0216 16:34:17.648910 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt2v7" event={"ID":"18edbe2f-e5ad-43df-863e-524fabeed67c","Type":"ContainerStarted","Data":"fa5f9737c04dea3d01df2d6a5370925204647cfdbdac5def9bb7f583ed6a048e"} Feb 16 16:34:18 crc kubenswrapper[4705]: I0216 16:34:18.664649 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt2v7" 
event={"ID":"18edbe2f-e5ad-43df-863e-524fabeed67c","Type":"ContainerStarted","Data":"38d65b0d9f35998a8b3461cb0d9299908057fd7d5d0900b602b2dce99fbbb9c2"} Feb 16 16:34:19 crc kubenswrapper[4705]: I0216 16:34:19.419978 4705 scope.go:117] "RemoveContainer" containerID="8d6bc574d20aa9c497a30d5025f97a695da67967ccc0cd665c14144343e2ac95" Feb 16 16:34:19 crc kubenswrapper[4705]: E0216 16:34:19.420710 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fnnf4_openshift-machine-config-operator(6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c)\"" pod="openshift-machine-config-operator/machine-config-daemon-fnnf4" podUID="6f92e3ed-2ba8-4202-a1b8-7350fadc1d8c" Feb 16 16:34:20 crc kubenswrapper[4705]: E0216 16:34:20.422253 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="0eefb1ac-9933-45ff-a3de-de6a375bef45" Feb 16 16:34:21 crc kubenswrapper[4705]: E0216 16:34:21.421219 4705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-d9lbf" podUID="09e6dd23-2e83-460f-b42f-885bf7af0214" Feb 16 16:34:23 crc kubenswrapper[4705]: I0216 16:34:23.720734 4705 generic.go:334] "Generic (PLEG): container finished" podID="18edbe2f-e5ad-43df-863e-524fabeed67c" containerID="38d65b0d9f35998a8b3461cb0d9299908057fd7d5d0900b602b2dce99fbbb9c2" exitCode=0 Feb 16 16:34:23 crc kubenswrapper[4705]: I0216 16:34:23.721449 4705 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-rt2v7" event={"ID":"18edbe2f-e5ad-43df-863e-524fabeed67c","Type":"ContainerDied","Data":"38d65b0d9f35998a8b3461cb0d9299908057fd7d5d0900b602b2dce99fbbb9c2"} Feb 16 16:34:24 crc kubenswrapper[4705]: I0216 16:34:24.733150 4705 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rt2v7" event={"ID":"18edbe2f-e5ad-43df-863e-524fabeed67c","Type":"ContainerStarted","Data":"26e234b2759d2cb6166de47bddcc1e64fc3272dd06910c7334d08af2bfd11d13"} Feb 16 16:34:24 crc kubenswrapper[4705]: I0216 16:34:24.763681 4705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rt2v7" podStartSLOduration=2.273158653 podStartE2EDuration="8.763353603s" podCreationTimestamp="2026-02-16 16:34:16 +0000 UTC" firstStartedPulling="2026-02-16 16:34:17.650644527 +0000 UTC m=+6051.835621603" lastFinishedPulling="2026-02-16 16:34:24.140839457 +0000 UTC m=+6058.325816553" observedRunningTime="2026-02-16 16:34:24.758496306 +0000 UTC m=+6058.943473382" watchObservedRunningTime="2026-02-16 16:34:24.763353603 +0000 UTC m=+6058.948330679" Feb 16 16:34:26 crc kubenswrapper[4705]: I0216 16:34:26.500365 4705 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rt2v7" Feb 16 16:34:26 crc kubenswrapper[4705]: I0216 16:34:26.500872 4705 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rt2v7" Feb 16 16:34:27 crc kubenswrapper[4705]: I0216 16:34:27.553257 4705 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rt2v7" podUID="18edbe2f-e5ad-43df-863e-524fabeed67c" containerName="registry-server" probeResult="failure" output=< Feb 16 16:34:27 crc kubenswrapper[4705]: timeout: failed to connect service ":50051" within 1s Feb 16 16:34:27 crc kubenswrapper[4705]: >